CN110995994B - Image shooting method and related device - Google Patents

Image shooting method and related device

Info

Publication number
CN110995994B
CN110995994B
Authority
CN
China
Prior art keywords
camera application
module
party
camera
hardware abstraction
Prior art date
Legal status
Active
Application number
CN201911253936.4A
Other languages
Chinese (zh)
Other versions
CN110995994A (en)
Inventor
陈岩
方攀
Current Assignee
Shanghai Jinsheng Communication Technology Co ltd
Original Assignee
Shanghai Jinsheng Communication Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Jinsheng Communication Technology Co ltd
Priority to CN201911253936.4A
Publication of CN110995994A
Application granted
Publication of CN110995994B
Status: Active
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 - Control of cameras or camera modules
    • H04N 23/62 - Control of parameters via user interfaces
    • H04N 23/617 - Upgrading or updating of programs or applications for camera control

Abstract

The embodiments of the present application disclose an image shooting method and a related device. The method includes: a third-party camera application sends a first preview image and a first data request to a hardware abstraction layer of an operating system; when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area, and sends the target area information to the third-party camera application, the point-of-interest recognition algorithm being an algorithm that the operating system opens to the third-party camera application at the request the application makes through a media service module; and the third-party camera application receives the target area information sent by the hardware abstraction layer and adjusts camera parameters according to the target area information to acquire a target image. By opening the underlying point-of-interest recognition algorithm to the third-party camera application, the method and device help increase the image processing speed and enhance the image shooting effect.

Description

Image shooting method and related device
Technical Field
The present application relates to the field of electronic devices, and in particular, to an image capturing method and related apparatus.
Background
With the popularization of electronic devices, camera application software of all kinds is widely used. As users demand more from camera data processing, third-party application software needs stronger enhancement functions, or images processed by algorithms, to meet those demands.
At present, third-party application software relies on its own algorithms for such processing. When it performs image processing to extract points of interest, system power consumption rises, system resource occupancy becomes very high, and the processing speed of the system slows down; processing each frame of image then takes a long time, causing problems such as system stutter and heating.
Disclosure of Invention
The embodiments of the present application provide an image shooting method and a related device, which open the underlying point-of-interest recognition algorithm to a third-party camera application, helping to increase the image processing speed and enhance the image shooting effect.
In a first aspect, an embodiment of the present application provides an image capturing method, which is applied to an electronic device, where the electronic device includes an operating system and a media service module, an application layer of the operating system is provided with a third-party camera application, and the method includes:
the third party camera application sending a first preview image and a first data request to a hardware abstraction layer of the operating system;
when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, calling a point-of-interest recognition algorithm to process the first preview image to obtain a target area, and sending target area information to the third-party camera application, wherein the point-of-interest recognition algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module;
and the third-party camera application receives the target area information sent by the hardware abstraction layer and adjusts camera parameters according to the target area information to acquire a target image.
In a second aspect, an embodiment of the present application provides an image capturing apparatus, which is applied to an electronic device, where the electronic device includes an operating system and a media service module, an application layer of the operating system is provided with a third-party camera application, the apparatus includes a processing unit and a communication unit, where:
the processing unit is configured to: send, by the third-party camera application through the communication unit, a first preview image and a first data request to a hardware abstraction layer of the operating system; when the hardware abstraction layer receives, through the communication unit, the first preview image and the first data request sent by the third-party camera application, call a point-of-interest recognition algorithm to process the first preview image to obtain a target area, and send target area information to the third-party camera application through the communication unit, wherein the point-of-interest recognition algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module; and receive, by the third-party camera application through the communication unit, the target area information sent by the hardware abstraction layer and adjust camera parameters according to the target area information to acquire a target image.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a chip, including a processor configured to call and run a computer program from a memory, so that a device on which the chip is installed executes some or all of the steps described in any method of the first aspect of the embodiments of the present application.
In a fifth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, the third-party camera application in the electronic device sends a first preview image and a first data request to the hardware abstraction layer of the operating system. When the hardware abstraction layer receives the first preview image and the first data request, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area and sends the target area information to the third-party camera application, the point-of-interest recognition algorithm being an algorithm that the operating system opens to the third-party camera application at the request the application makes through the media service module. The third-party camera application then receives the target area information sent by the hardware abstraction layer and adjusts the camera parameters according to it to acquire a target image. In this way, the third-party camera application can call the underlying point-of-interest recognition algorithm through the media service module to process the preview image it captures, which avoids the application slowing down the system, increases the image processing speed, enhances the image shooting effect through the point-of-interest recognition algorithm, and improves the user experience.
Drawings
In order to describe the technical solutions in the embodiments of the present application more clearly, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. The drawings in the following description show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image capturing method provided in an embodiment of the present application;
FIG. 3 is a schematic flowchart of another image capturing method provided in the embodiments of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a block diagram of functional units of an image capturing apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device in the embodiments of the present application may be an electronic device with communication capability, and may include various handheld devices with a wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of user equipment (UE), mobile stations (MS), terminal devices, and so on.
At present, on the Android platform, a third-party camera application can access the underlying camera data through the standard Android application programming interface (API). However, if the underlying image is to be processed with more advanced enhancement functions or algorithms, there is no corresponding standard interface that maps the underlying capabilities to third-party access. Third-party camera application software therefore uses its own algorithms for such processing; for example, when it performs image processing to extract points of interest, system power consumption rises, system resource occupancy becomes high, and the processing speed of the system slows down, so that processing each frame of image takes a lot of time, causing problems such as system stutter and heating. If the image processing effect is reduced instead, the stutter and heating problems are alleviated, but the user experience suffers. The current schemes therefore cannot strike a good balance between the usage effect and system performance, and point-of-interest-based policy optimization for third-party cameras is limited.
In view of the foregoing problems, embodiments of the present application provide an image capturing method and a related apparatus, and the following describes embodiments of the present application in detail with reference to the accompanying drawings.
As shown in fig. 1, an electronic device 100 according to an embodiment of the present application includes a media platform and an operating system, where the operating system may be, for example, an Android system. The media platform includes, without limitation, a media service module, a media management module (also referred to as a media interface module), a media policy module, and an algorithm management module, and the media service module is disposed independently of the operating system. The application layer of the operating system is provided with third-party applications (including a third-party camera application) and the media management module. The hardware abstraction layer of the operating system is provided with a hardware abstraction module (an Android-native module, such as the native camera hardware abstraction module CameraHAL), the media policy module, and the algorithm management module. The native architecture of the operating system further includes a framework layer and a driver layer. The framework layer includes application interfaces of various native applications (such as the native camera application programming interface), application services (such as the native camera service), and a framework layer interface (such as the Google HAL3 interface). The hardware abstraction layer includes a hardware abstraction layer interface (such as HAL3.0) and the hardware abstraction modules of various native applications (such as the camera hardware abstraction module). The driver layer includes various drivers (such as a screen display driver, an audio driver, and the like) for enabling the hardware of the electronic device, such as an image signal processor (ISP) plus sensor.
The third-party application can communicate with the media service module through the media management module. The media service module can communicate with the media policy module through an Android-native information link formed by the application interface, the application service, the framework layer interface, the hardware abstraction layer interface and the camera hardware abstraction module. The media policy module communicates with the algorithm management module, and the algorithm management module maintains an Android-native algorithm library that contains the enhancement functions supported by the various native applications; for the native camera application, for example, enhancement functions such as point-of-interest recognition, binocular shooting, beautification, sharpening and night scene are supported. In addition, the media service module can also communicate directly with the media policy module or the algorithm management module.
Based on the above framework, the media service module may enable an algorithm module in the algorithm library through the Android-native information link, the media policy module and the algorithm management module, or directly through the media policy module and the algorithm management module, or directly through the algorithm management module, thereby opening the enhancement functions associated with native applications to third-party applications.
Also based on the above framework, the media service module may invoke the corresponding driver to enable certain hardware through the Android-native information link, through a first information link composed of the media policy module and the camera hardware abstraction module, or through a second information link composed of the media policy module, the algorithm management module and the camera hardware abstraction module, thereby opening the hardware associated with native applications to third-party applications.
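To make the module relationships above easier to follow, here is a minimal sketch of the layered structure in Kotlin. All class names, the algorithm-library keys and the ByteArray frame type are assumptions made for illustration; they do not reflect the actual implementation of the media platform.

```kotlin
// Hypothetical sketch of the module layering; class names, map keys and the
// ByteArray frame type are illustrative assumptions, not the real implementation.
class AlgorithmManager {
    // Android-native algorithm library, keyed by enhancement-function name.
    private val algorithms = mutableMapOf<String, (ByteArray) -> ByteArray>(
        "poi_recognition" to { frame: ByteArray -> frame },  // placeholder point-of-interest algorithm
        "beauty" to { frame: ByteArray -> frame },           // placeholder beautification algorithm
    )
    fun enable(name: String): ((ByteArray) -> ByteArray)? = algorithms[name]
}

class MediaPolicyModule(private val algorithmManager: AlgorithmManager) {
    fun enableAlgorithm(name: String) = algorithmManager.enable(name)
}

class CameraHalModule(private val policy: MediaPolicyModule) {
    // The camera hardware abstraction module reaches the algorithm library via the policy module.
    fun process(frame: ByteArray, algorithmName: String): ByteArray =
        policy.enableAlgorithm(algorithmName)?.invoke(frame) ?: frame
}

class MediaServiceModule(
    private val policy: MediaPolicyModule,
    private val algorithmManager: AlgorithmManager,
) {
    // The service module can go through the policy module or straight to the algorithm manager.
    fun openEnhancement(name: String) = policy.enableAlgorithm(name) ?: algorithmManager.enable(name)
}
```

The point is only the call direction: the media service module can reach the algorithm library either through the policy module or directly through the algorithm manager, while the camera hardware abstraction module reaches it through the policy module.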
Referring to fig. 2, fig. 2 is a flowchart illustrating an image capturing method according to an embodiment of the present disclosure, where the image capturing method can be applied to the electronic device shown in fig. 1.
As shown in the drawing, the present image capturing method includes the following operations.
S201, a third party camera application sends a first preview image and a first data request to a hardware abstraction layer of an operating system;
S202, when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area, and sends the target area information to the third-party camera application, wherein the point-of-interest recognition algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module;
the camera hardware abstraction module in the hardware abstraction layer may call an interest point identification algorithm in the algorithm module through the media policy module, where the interest point identification algorithm is stored in the algorithm management module, for example, may be an eye tracking algorithm, and determine, according to the eye tracking algorithm, a point of interest, i.e., an interest point, of a user for a first preview image, and the target area is the interest point area.
The target area information may be coordinate information, for example, coordinate information of the point of interest area on the first preview image, or may be coordinate information of the point of interest area on the display screen, which is not limited herein.
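A minimal sketch of what step S202 could look like on the hardware abstraction layer side, assuming hypothetical types: the detector below simply picks the brightest cell of a coarse grid as a stand-in for the eye-tracking based point-of-interest recognition mentioned above, and reports the target area as coordinates normalized to the first preview image.

```kotlin
// Minimal sketch of step S202 with hypothetical types. The detector below is a
// stand-in (it picks the brightest grid cell), not the eye-tracking algorithm.
data class TargetArea(val left: Float, val top: Float, val right: Float, val bottom: Float)

class PoiRecognizer {
    // Scan a grayscale preview buffer on a coarse grid and report the brightest
    // cell as the point-of-interest (target) area, in normalized coordinates.
    fun recognize(gray: IntArray, width: Int, height: Int, grid: Int = 4): TargetArea {
        var best = -1L
        var bestX = 0
        var bestY = 0
        val cellW = width / grid
        val cellH = height / grid
        for (gy in 0 until grid) for (gx in 0 until grid) {
            var sum = 0L
            for (y in gy * cellH until (gy + 1) * cellH)
                for (x in gx * cellW until (gx + 1) * cellW)
                    sum += gray[y * width + x]
            if (sum > best) { best = sum; bestX = gx; bestY = gy }
        }
        return TargetArea(
            bestX.toFloat() / grid, bestY.toFloat() / grid,
            (bestX + 1).toFloat() / grid, (bestY + 1).toFloat() / grid,
        )
    }
}
```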
S203, the third party camera application receives the target area information sent by the hardware abstraction layer, and adjusts camera parameters according to the target area information to obtain a target image;
wherein, adjusting the camera parameters according to the target area information for highlighting the target area, the camera parameters may include: an Automatic Exposure parameter (AE), an Auto Focus parameter (AF), an Automatic White Balance (AWB), and the like, but are not limited thereto.
The adjustment of the camera parameters may also adjust a focusing position, or may adjust the size of the target area through a zooming process, and the like, which is not limited herein.
In a specific implementation, the third-party camera application may determine a first camera parameter adjustment policy according to the target area and adjust the camera parameters according to the first camera parameter adjustment policy. Alternatively, under certain conditions (for example, when the system resource occupancy is high), it may send the target area information to the algorithm management module, which determines a second camera parameter adjustment policy, and the third-party camera application then adjusts the camera parameters according to the second camera parameter adjustment policy. This is not limited herein.
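As an illustration of how the parameter adjustment in S203 could be expressed against the standard Android Camera2 API, the sketch below converts a target area given in normalized preview coordinates into AF/AE metering regions. The RectF-based target area, the activeArray argument and the direct scaling to the sensor's active array (ignoring any crop region) are simplifying assumptions, not something prescribed by the patent.

```kotlin
import android.graphics.Rect
import android.graphics.RectF
import android.hardware.camera2.CaptureRequest
import android.hardware.camera2.params.MeteringRectangle

// Sketch only: maps a normalized target area onto Camera2 AF/AE metering regions.
// The RectF target area and the activeArray parameter are assumptions.
fun applyTargetArea(
    builder: CaptureRequest.Builder,
    targetArea: RectF,   // target area in [0, 1] coordinates relative to the preview
    activeArray: Rect,   // from CameraCharacteristics.SENSOR_INFO_ACTIVE_ARRAY_SIZE
) {
    // Convert the normalized target area into sensor (active-array) coordinates.
    val region = MeteringRectangle(
        (activeArray.width() * targetArea.left).toInt(),
        (activeArray.height() * targetArea.top).toInt(),
        (activeArray.width() * targetArea.width()).toInt(),
        (activeArray.height() * targetArea.height()).toInt(),
        MeteringRectangle.METERING_WEIGHT_MAX,
    )
    // Focus and meter exposure on the point-of-interest area (AF/AE parameters).
    builder.set(CaptureRequest.CONTROL_AF_REGIONS, arrayOf(region))
    builder.set(CaptureRequest.CONTROL_AE_REGIONS, arrayOf(region))
}
```

In a real application the region would also need to be clipped to the current SCALER_CROP_REGION and the builder applied to an active capture session.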
It can be seen that, in the embodiments of the present application, the third-party camera application in the electronic device sends a first preview image and a first data request to the hardware abstraction layer of the operating system. When the hardware abstraction layer receives the first preview image and the first data request, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area and sends the target area information to the third-party camera application, the point-of-interest recognition algorithm being an algorithm that the operating system opens to the third-party camera application at the request the application makes through the media service module. The third-party camera application then receives the target area information sent by the hardware abstraction layer and adjusts the camera parameters according to it to acquire a target image. In this way, the third-party camera application can call the underlying point-of-interest recognition algorithm through the media service module to process the preview image it captures, which avoids the application slowing down the system, increases the image processing speed, enhances the image shooting effect through the point-of-interest recognition algorithm, and improves the user experience.
In one possible example, the third-party camera application receives the target area information sent by the hardware abstraction layer, and adjusts camera parameters according to the target area information to obtain a target image, including:
the third-party camera application receives the target area information sent by the hardware abstraction layer, adjusts camera parameters according to the target area information and obtains a second preview image;
the third party camera application sending the second preview image to the hardware abstraction layer;
the hardware abstraction layer receives the second preview image sent by the third-party camera application, calls an image processing algorithm to process the second preview image, obtains the processed target image, and sends the target image to the third-party camera application, wherein the image processing algorithm is that the third-party camera application requests an operating system to be open for the third-party camera application through the media service module, a first definition of the target area in the target image is larger than a second definition, and the second definition is the definition of an area except the target area in the target image.
Similarly, the camera hardware abstraction module in the hardware abstraction layer may call, through the media policy module, an image processing algorithm stored in the algorithm management module. The image processing algorithm may be, for example, noise reduction or area blurring, which is not limited herein; with area blurring, for instance, all areas outside the target area are blurred so as to highlight the target area.
As can be seen, in this example, after the third-party camera application adjusts the camera parameters at the hardware level according to the target area to obtain the second preview image, the algorithm management module applies software algorithm processing to the second preview image, so that the optimization of the target area is further improved without slowing down the processing speed of the system.
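A minimal sketch of the "area blurring" variant described above, assuming a single-channel integer buffer: a naive box blur is applied everywhere outside the target rectangle, so the target area ends up with a higher definition than the rest of the frame. The buffer layout, rectangle parameters and blur radius are illustrative assumptions.

```kotlin
// Minimal sketch of area blurring: blur outside the target rectangle only, so the
// point-of-interest area keeps a higher definition than the rest of the frame.
fun blurOutsideTarget(
    gray: IntArray, width: Int, height: Int,
    targetLeft: Int, targetTop: Int, targetRight: Int, targetBottom: Int,
    radius: Int = 2,
): IntArray {
    val out = gray.copyOf()
    for (y in 0 until height) for (x in 0 until width) {
        val insideTarget = x in targetLeft until targetRight && y in targetTop until targetBottom
        if (insideTarget) continue            // keep the point-of-interest area sharp
        var sum = 0
        var count = 0
        for (dy in -radius..radius) for (dx in -radius..radius) {
            val nx = x + dx
            val ny = y + dy
            if (nx in 0 until width && ny in 0 until height) { sum += gray[ny * width + nx]; count++ }
        }
        out[y * width + x] = sum / count      // box-blurred pixel outside the target area
    }
    return out
}
```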
In one possible example, the application layer of the operating system is provided with a media management module, the third-party camera application is in communication connection with the media management module, the media management module is in communication connection with the media service module, and before the third-party camera application sends the first preview image and the first data request to the hardware abstraction layer of the operating system, the method further includes:
the media service module receives a version information acquisition request carrying an authentication code sent by the third-party camera application through the media management module;
the media service module authenticates the third party camera application according to the authentication code;
and when the authentication is successful, the media service module feeds back the version information to the third-party camera application through the media management module.
The third-party camera application is in communication connection with the media management module, and the media management module is also in communication connection with the media service module, so that the third-party camera application and the media service module can communicate through the media management module.
For example, the media service module receives, through the media management module, a version information acquisition request carrying an authentication code from the third-party camera application and verifies the authentication code. If the verification succeeds, it returns the version information to the third-party camera application, for example a character string such as "1.1:1.2"; if the verification of the authentication code fails, the media service module returns an empty character string to the third-party camera application through the media management module.
The authentication code is ciphertext encrypted with RSA. The RSA algorithm is an asymmetric encryption algorithm; its key is relatively long and its security is high.
As can be seen, in this example, the media service module opens the underlying capability to the third-party camera application only after the authentication passes, rather than letting it be used directly, so security can be effectively controlled, which facilitates opening the underlying functions safely.
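A minimal sketch of this authentication exchange using the standard JDK crypto API: the third-party camera application encrypts an identity string with RSA to form the authentication code, and the media service module decrypts and checks it, returning a version string such as "1.1:1.2" on success and an empty string on failure. The in-memory key pair, the expected plaintext and the class names are assumptions for illustration only.

```kotlin
import java.security.KeyPair
import java.security.KeyPairGenerator
import javax.crypto.Cipher

// Sketch of RSA-based authentication of the version information request. Key
// provisioning and the expected identity string are assumptions for illustration.
class MediaServiceAuth(private val keyPair: KeyPair, private val expectedAppId: String) {

    // Returns the version string (e.g. "1.1:1.2") on success, or "" on failure,
    // mirroring the behaviour described in the text.
    fun handleVersionRequest(authenticationCode: ByteArray): String = try {
        val cipher = Cipher.getInstance("RSA")
        cipher.init(Cipher.DECRYPT_MODE, keyPair.private)
        val appId = String(cipher.doFinal(authenticationCode))
        if (appId == expectedAppId) "1.1:1.2" else ""
    } catch (e: Exception) {
        ""  // decryption failure: authentication does not pass
    }
}

fun main() {
    val keyPair = KeyPairGenerator.getInstance("RSA").apply { initialize(2048) }.generateKeyPair()
    val service = MediaServiceAuth(keyPair, "app-id:example.camera")

    // The third-party camera application encrypts its identity with the public key
    // to form the authentication code carried in the version information request.
    val cipher = Cipher.getInstance("RSA")
    cipher.init(Cipher.ENCRYPT_MODE, keyPair.public)
    val authCode = cipher.doFinal("app-id:example.camera".toByteArray())

    println(service.handleVersionRequest(authCode))        // "1.1:1.2"
    println(service.handleVersionRequest(ByteArray(16)))   // "" (authentication fails)
}
```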
In one possible example, a hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module, and an algorithm management module, the media policy module is in communication connection with the camera hardware abstraction module and the algorithm management module, and before the calling the point of interest identification algorithm to process the first preview image, the method further includes:
the media management module receives a first identifier corresponding to the interest point identification algorithm sent by the third-party camera application;
the media management module sets the first identifier to the media service module;
the media strategy module receives the first identifier sent by the media service module, converts the first identifier into a second identifier and sends the second identifier to the camera hardware abstraction module;
the invoking of the point of interest recognition algorithm to process the first preview image includes:
and the camera hardware abstraction module calls the interest point identification algorithm in the algorithm management module to process the first preview image through the media strategy module according to the second identifier.
The first identifier is the identifier corresponding to the point-of-interest recognition algorithm in the function list of the media service module whose version information the third-party camera application has collected. The function list may list all functions supported for the third-party camera application, that is, all functions that can be opened for the third-party camera application to access.
The first identifier is an identifier that can be recognized by the media service module, and may be, for example, a JSON string such as {"EIS": 1}.
The second identifier corresponds to the first identifier, and the second identifier is an identifier that can be recognized by the camera hardware abstraction module, and may be, for example, { "Key": value }, where value may be 0, 1, and the like, which is not limited herein.
Likewise, in the example above, the hardware abstraction layer may invoke the image processing algorithm to process the second preview image in the manner shown in this embodiment: the image processing algorithm may correspond to a third identifier; the media policy module receives the third identifier sent by the media service module, converts it into a fourth identifier and sends the fourth identifier to the camera hardware abstraction module; and the camera hardware abstraction module calls, through the media policy module according to the fourth identifier, the image processing algorithm in the algorithm management module to process the second preview image.
Therefore, in this example, the media policy module can convert the first identifier recognizable by the media service module into the second identifier recognizable by the camera hardware abstraction module, so that the third-party camera application can call the underlying algorithm, which improves the convenience of algorithm invocation.
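A sketch of the identifier conversion performed by the media policy module, assuming a simple mapping table: a first identifier recognizable by the media service module (a JSON-style string such as {"EIS": 1}) is parsed and converted into a second, key/value-style identifier for the camera hardware abstraction module. The vendor key names and the regular-expression parsing are illustrative assumptions.

```kotlin
// Sketch of first-identifier -> second-identifier conversion; the mapping table
// and the HAL key names are assumptions for illustration.
data class HalTag(val key: String, val value: Int)   // second identifier, e.g. a {"Key": value} pair

object MediaPolicyModule {
    // Assumed mapping from service-level feature names to HAL-level vendor keys.
    private val keyMap = mapOf(
        "EIS" to "vendor.policy.eis",
        "POI" to "vendor.policy.poi_recognition",
    )

    // Parse a minimal {"NAME": n} first identifier and convert it to a HAL tag.
    fun convert(firstIdentifier: String): HalTag? {
        val match = Regex("""\{\s*"(\w+)"\s*:\s*(\d+)\s*\}""").find(firstIdentifier) ?: return null
        val (name, value) = match.destructured
        val halKey = keyMap[name] ?: return null
        return HalTag(halKey, value.toInt())
    }
}

fun main() {
    println(MediaPolicyModule.convert("""{"EIS": 1}"""))   // HalTag(key=vendor.policy.eis, value=1)
    println(MediaPolicyModule.convert("""{"POI": 1}"""))   // HalTag(key=vendor.policy.poi_recognition, value=1)
}
```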
In this possible example, the adjusting the camera parameter according to the target area information to obtain the second preview image includes:
the third party camera application determining current first scene information;
the third-party camera application determines a first camera parameter adjustment strategy according to the first scene information, the first preview image and the target area information, and sends the first camera parameter adjustment strategy to a camera hardware abstraction module in a hardware abstraction layer;
the camera hardware abstraction module adjusts the camera hardware parameters corresponding to the hardware layer through the driving layer according to the first camera parameter adjustment strategy;
and the third-party camera application acquires the second preview image according to the adjusted camera hardware parameters.
The first scene information may be, for example, a front camera self-timer scene, a rear camera scene, or a night scene, a backlight scene, etc., which is not limited herein.
In a specific implementation, the third-party camera application may determine the first camera parameter adjustment policy according to the first scene information, the first preview image, and the target area information, and the first camera parameter adjustment policy may take various forms. For example, different camera parameter adjustments, such as AE and AWB adjustments, may be made depending on whether the current first scene is a night scene or a backlit scene. Alternatively, the camera parameters may be adjusted according to the size of the target area indicated by the target area information: when the target area is small, a zoom-in adjustment may be performed; when the target area is large, or the object in the target area is large, a zoom-out adjustment may be performed. This is not limited herein.
The second preview image is the preview image captured by the third-party camera application after the camera parameters have been adjusted.
As can be seen, in this example, the third-party camera application determines the first camera parameter adjustment policy according to the first scene information and the target area, which is beneficial to improving the accuracy of camera parameter adjustment.
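A sketch of how the first camera parameter adjustment policy could be derived from the scene information and the target area, following the examples in this section: night and backlit scenes get an exposure tweak, a small target area triggers a zoom-in and a large one a zoom-out. The scene names, thresholds and policy fields are assumptions for illustration.

```kotlin
// Sketch of deriving a first camera parameter adjustment policy; scene names,
// thresholds and the policy fields are illustrative assumptions.
enum class Scene { FRONT_SELFIE, REAR, NIGHT, BACKLIT }

data class TargetArea(val left: Float, val top: Float, val right: Float, val bottom: Float) {
    val fraction: Float get() = (right - left) * (bottom - top)   // share of the preview frame
}

data class CameraAdjustment(
    val exposureCompensation: Int = 0,   // AE tweak, in EV steps
    val awbMode: String = "auto",        // AWB tweak
    val zoomFactor: Float = 1.0f,        // zoom in (>1) or out (<1)
)

fun firstAdjustmentPolicy(scene: Scene, target: TargetArea): CameraAdjustment {
    val exposure = when (scene) {
        Scene.NIGHT -> 2                 // brighten dark scenes
        Scene.BACKLIT -> 1               // lift the backlit subject
        else -> 0
    }
    val zoom = when {
        target.fraction < 0.05f -> 1.5f  // small point-of-interest area: zoom in
        target.fraction > 0.50f -> 0.8f  // large area or large subject: zoom out
        else -> 1.0f
    }
    return CameraAdjustment(exposureCompensation = exposure, zoomFactor = zoom)
}
```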
In one possible example, after the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application and calls the point-of-interest recognition algorithm to process the first preview image to obtain the target area, the method further includes:
an algorithm management module determines the image type in the target area;
the algorithm management module determines a second camera parameter adjustment strategy according to the image type;
and the algorithm management module sends the second camera parameter adjustment strategy to the third-party camera application, wherein the second camera parameter adjustment strategy is used for the third-party camera application to adjust the camera parameters so as to acquire the target image.
The image type may be, for example, a person type, a landscape type, or the like, and is not limited herein.
In a specific implementation, the algorithm management module may determine the second camera parameter adjustment policy according to the image type in various ways. For example, when the image type is a person type, it may determine an adjustment policy A that enlarges the person in the center; when the image type is a landscape type, it may determine an adjustment policy B, for example, when the landscape is a mountain, performing a zoom-out so that the whole landscape fits into the first preview image. This is not limited herein.
After the algorithm management module sends the second camera parameter adjustment policy to the third-party camera application, the third-party camera application may adjust the camera parameters directly according to the second camera parameter adjustment policy, or it may compare the first camera parameter adjustment policy with the second camera parameter adjustment policy. If the two policies are the same, it adjusts the camera parameters with the second camera parameter adjustment policy; if they differ, it determines a third camera parameter adjustment policy from the first and second camera parameter adjustment policies and adjusts the camera parameters with the third policy. This is not limited herein. The third camera parameter adjustment policy may be determined from the first and second policies in various ways. For example, the average of each parameter in the two policies may be taken: if the focal length adjustment amount in the first camera parameter adjustment policy is A and that in the second camera parameter adjustment policy is B, the focal length adjustment amount in the third camera parameter adjustment policy is the average of A and B. Alternatively, the smaller adjustment amount of each parameter in the two policies may be taken: if the focal length adjustment amount in the first policy is A, that in the second policy is B, and A is greater than B, the focal length adjustment amount in the third policy is B. This is not limited herein.
As can be seen, in this example, after the target area in which the user is interested is determined, the algorithm management module determines the second camera parameter adjustment policy according to the image type in the target area and sends it to the third-party camera application, which can then adjust the camera parameters according to that policy. This reduces the processing pressure on the third-party camera application, prevents it from occupying excessive system resources, and prevents the system from stuttering.
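A sketch of the two combination rules described above, assuming a policy reduced to a single focal-length adjustment field: if the first and second policies are identical the second one is used; otherwise a third policy is derived either by averaging each parameter or by taking the adjustment with the smaller magnitude.

```kotlin
import kotlin.math.abs

// Sketch of combining the first and second camera parameter adjustment policies.
// The single focalLengthDelta field is an illustrative assumption.
data class AdjustmentPolicy(val focalLengthDelta: Float)

fun combine(first: AdjustmentPolicy, second: AdjustmentPolicy, average: Boolean = true): AdjustmentPolicy =
    when {
        first == second -> second   // identical policies: use the second policy
        average -> AdjustmentPolicy((first.focalLengthDelta + second.focalLengthDelta) / 2f)
        else -> if (abs(first.focalLengthDelta) <= abs(second.focalLengthDelta)) first else second
    }

fun main() {
    val a = AdjustmentPolicy(focalLengthDelta = 4f)   // from the third-party camera application
    val b = AdjustmentPolicy(focalLengthDelta = 2f)   // from the algorithm management module
    println(combine(a, b))                  // average: AdjustmentPolicy(focalLengthDelta=3.0)
    println(combine(a, b, average = false)) // smaller adjustment: AdjustmentPolicy(focalLengthDelta=2.0)
}
```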
Referring to fig. 3, fig. 3 is a flowchart illustrating another image capturing method according to an embodiment of the present disclosure, where the image capturing method can be applied to the electronic device shown in fig. 1.
As shown in the figure, the image capturing method includes the following operations:
s301, the media service module receives a version information acquisition request carrying an authentication code sent by a third-party camera application through the media management module.
S302, the media service module authenticates the third-party camera application according to the authentication code.
S303, when the authentication is successful, the media service module feeds back the version information to the third-party camera application through the media management module.
S304, the media management module receives a first identifier corresponding to the interest point identification algorithm sent by the third-party camera application.
S305, the media management module sets the first identifier to the media service module.
S306, the media strategy module receives the first identifier sent by the media service module, converts the first identifier into a second identifier and sends the second identifier to the camera hardware abstraction module.
S307, the third party camera application sends a first preview image and a first data request to the camera hardware abstraction module.
S308, when the camera hardware abstraction module receives the first preview image and the first data request sent by the third-party camera application, it calls, through the media policy module according to the second identifier, the point-of-interest recognition algorithm to process the first preview image to obtain a target area, and sends the target area information to the third-party camera application.
S309, the third party camera application receives the target area information sent by the hardware abstraction layer, and adjusts camera parameters according to the target area information to obtain a target image.
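To summarize the sequence S301 to S309, the sketch below strings the steps together with hypothetical stubs; it only illustrates the order of interactions between the third-party camera application, the media service and policy modules and the camera hardware abstraction module, not real behaviour.

```kotlin
// Hypothetical stubs that trace the order of steps S301-S309; the return values
// and identifiers are placeholders, not real module outputs.
object Media {
    fun authenticate(authCode: String) = authCode == "valid-code"            // S302
    fun versionFor(ok: Boolean) = if (ok) "1.1:1.2" else ""                  // S303
    fun convertIdentifier(first: String) = "vendor.poi=1"                    // S306 (first -> second identifier)
    fun recognizePoi(preview: String) = "targetArea(0.25,0.25,0.75,0.75)"    // S308
    fun adjustParameters(target: String) = "target image focused on $target" // S309
}

fun main() {
    val version = Media.versionFor(Media.authenticate("valid-code"))   // S301-S303
    println("version info: $version")

    val secondId = Media.convertIdentifier("""{"POI": 1}""")           // S304-S306
    println("HAL identifier: $secondId")

    val target = Media.recognizePoi("first preview image")             // S307-S308
    println(Media.adjustParameters(target))                            // S309
}
```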
It can be seen that, in the embodiments of the present application, the third-party camera application in the electronic device sends a first preview image and a first data request to the hardware abstraction layer of the operating system. When the hardware abstraction layer receives the first preview image and the first data request, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area and sends the target area information to the third-party camera application, the point-of-interest recognition algorithm being an algorithm that the operating system opens to the third-party camera application at the request the application makes through the media service module. The third-party camera application then receives the target area information sent by the hardware abstraction layer and adjusts the camera parameters according to it to acquire a target image. In this way, the third-party camera application can call the underlying point-of-interest recognition algorithm through the media service module to process the preview image it captures, which avoids the application slowing down the system, increases the image processing speed, enhances the image shooting effect through the point-of-interest recognition algorithm, and improves the user experience.
In addition, the media policy module can convert the first identifier recognizable by the media service module into the second identifier recognizable by the camera hardware abstraction module, so that the third-party camera application can call the underlying algorithm, which improves the convenience of algorithm invocation.
In addition, the media service module opens the underlying capability to the third-party camera application only after authentication, rather than letting it be used directly, so security can be effectively controlled, which facilitates opening the underlying functions safely.
Consistent with the embodiments shown in fig. 2 and fig. 3, please refer to fig. 4, and fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application, where the electronic device 400 includes an operating system and a media service module, an application layer of the operating system is provided with a third-party camera application, and as shown in the figure, the electronic device 400 further includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for performing any step in the foregoing method embodiments.
In one possible example, the program 421 includes instructions for performing the following steps: the third-party camera application sends a first preview image and a first data request to a hardware abstraction layer of the operating system; when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area and sends the target area information to the third-party camera application, wherein the point-of-interest recognition algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module; and the third-party camera application receives the target area information sent by the hardware abstraction layer and adjusts camera parameters according to the target area information to acquire a target image.
In one possible example, with respect to the third-party camera application receiving the target area information sent by the hardware abstraction layer and adjusting camera parameters according to the target area information to obtain a target image, the instructions in the program 421 are specifically configured to perform the following operations: the third-party camera application receives the target area information sent by the hardware abstraction layer, adjusts camera parameters according to the target area information and obtains a second preview image; the third-party camera application sends the second preview image to the hardware abstraction layer; and the hardware abstraction layer receives the second preview image sent by the third-party camera application, calls an image processing algorithm to process the second preview image to obtain the processed target image, and sends the target image to the third-party camera application, wherein the image processing algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module, a first definition of the target area in the target image is greater than a second definition, and the second definition is the definition of the area of the target image other than the target area.
In one possible example, the application layer of the operating system is provided with a media management module, the third party camera application is communicatively connected to the media management module, the media management module is communicatively connected to the media service module, and the program 421 further includes instructions for: before the third-party camera application sends a first preview image and a first data request to a hardware abstraction layer of the operating system, the media service module receives a version information acquisition request carrying an authentication code sent by the third-party camera application through the media management module; the media service module authenticates the third party camera application according to the authentication code; and when the authentication is successful, the media service module feeds back the version information to the third-party camera application through the media management module.
In one possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module and an algorithm management module, the media policy module is communicatively connected to the camera hardware abstraction module and the algorithm management module, and the program 421 further includes instructions for: before the first preview image is processed by calling the interest point identification algorithm, a media management module receives a first identifier corresponding to the interest point identification algorithm from the third-party camera application; the media management module sets the first identifier to the media service module; the media strategy module receives the first identifier sent by the media service module, converts the first identifier into a second identifier and sends the second identifier to the camera hardware abstraction module;
in respect of processing the first preview image by invoking the point of interest recognition algorithm, the instructions in the program 421 are specifically configured to perform the following operations: and the camera hardware abstraction module calls the interest point identification algorithm in the algorithm management module to process the first preview image through the media strategy module according to the second identifier.
In one possible example, in terms of the adjusting the camera parameter according to the target area information and acquiring the second preview image, the instructions in the program 421 are specifically configured to perform the following operations: the third party camera application determining current first scene information; the third-party camera application determines a first camera parameter adjustment strategy according to the first scene information, the first preview image and the target area information, and sends the first camera parameter adjustment strategy to a camera hardware abstraction module in a hardware abstraction layer; the camera hardware abstraction module adjusts the camera hardware parameters corresponding to the hardware layer through the driving layer according to the first camera parameter adjustment strategy; and the third-party camera application acquires the second preview image according to the adjusted camera hardware parameters.
In one possible example, the program 421 also includes instructions for: when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, calling an interest point identification algorithm to process the first preview image to obtain a target area, and then determining an image type in the target area by an algorithm management module; the algorithm management module determines a second camera parameter adjustment strategy according to the image type; and the algorithm management module sends the second camera parameter adjustment strategy to the third-party camera application, wherein the second camera parameter adjustment strategy is used for the third-party camera application to adjust the camera parameters so as to acquire the target image.
The above description has presented the solutions of the embodiments of the present application mainly from the perspective of the method-side execution process. It is understood that, in order to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by hardware or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Skilled artisans may implement the described functionality in different ways for each particular application, but such implementations should not be considered to be beyond the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 5 is a block diagram of functional units of the image capturing apparatus 500 according to the embodiment of the present application. The image capturing apparatus 500 is applied to an electronic device, the electronic device includes an operating system and a media service module, an application layer of the operating system is provided with a third-party camera application, and the image capturing apparatus includes a processing unit 501 and a communication unit 502, where the processing unit 501 is configured to execute any one of the steps in the above method embodiments, and when data transmission such as sending is performed, the communication unit 502 is optionally invoked to complete a corresponding operation. The details will be described below.
The processing unit 501 is configured to: send, by the third-party camera application through the communication unit 502, a first preview image and a first data request to a hardware abstraction layer of the operating system; when the hardware abstraction layer receives, through the communication unit 502, the first preview image and the first data request sent by the third-party camera application, call a point-of-interest recognition algorithm to process the first preview image to obtain a target area, and send the target area information to the third-party camera application through the communication unit 502, wherein the point-of-interest recognition algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module; and receive, by the third-party camera application through the communication unit 502, the target area information sent by the hardware abstraction layer and adjust camera parameters according to the target area information to obtain a target image.
It can be seen that, in the embodiments of the present application, the third-party camera application in the electronic device sends a first preview image and a first data request to the hardware abstraction layer of the operating system. When the hardware abstraction layer receives the first preview image and the first data request, it calls a point-of-interest recognition algorithm to process the first preview image to obtain a target area and sends the target area information to the third-party camera application, the point-of-interest recognition algorithm being an algorithm that the operating system opens to the third-party camera application at the request the application makes through the media service module. The third-party camera application then receives the target area information sent by the hardware abstraction layer and adjusts the camera parameters according to it to acquire a target image. In this way, the third-party camera application can call the underlying point-of-interest recognition algorithm through the media service module to process the preview image it captures, which avoids the application slowing down the system, increases the image processing speed, enhances the image shooting effect through the point-of-interest recognition algorithm, and improves the user experience.
In one possible example, with respect to the third-party camera application receiving the target area information sent by the hardware abstraction layer through the communication unit 502 and adjusting camera parameters according to the target area information to acquire a target image, the processing unit 501 is specifically configured to: receive, by the third-party camera application through the communication unit 502, the target area information sent by the hardware abstraction layer, adjust camera parameters according to the target area information, and obtain a second preview image; send, by the third-party camera application, the second preview image to the hardware abstraction layer through the communication unit 502; and receive, by the hardware abstraction layer through the communication unit 502, the second preview image sent by the third-party camera application, call an image processing algorithm to process the second preview image to obtain the processed target image, and send the target image to the third-party camera application through the communication unit 502, wherein the image processing algorithm is an algorithm that the operating system opens to the third-party camera application at the request the third-party camera application makes through the media service module, a first definition of the target area in the target image is greater than a second definition, and the second definition is the definition of the area of the target image other than the target area.
In one possible example, an application layer of the operating system is provided with a media management module, the third-party camera application is communicatively connected to the media management module, the media management module is communicatively connected to the media service module, and the processing unit 501, before the third-party camera application sends the first preview image and the first data request to a hardware abstraction layer of the operating system, is further configured to: the media service module receives a version information acquisition request carrying an authentication code sent by the third-party camera application through the media management module and the communication unit 502; the media service module authenticates the third party camera application according to the authentication code; when the authentication is successful, the media service module feeds back the version information to the third party camera application through the media management module and the communication unit 502.
In a possible example, the hardware abstraction layer of the operating system is provided with a camera hardware abstraction module, a media policy module, and an algorithm management module, the media policy module being communicatively connected to the camera hardware abstraction module and the algorithm management module; before the interest point identification algorithm is invoked to process the first preview image, the processing unit 501 is further configured to: the media management module receives, through the communication unit 502, a first identifier corresponding to the interest point identification algorithm sent by the third-party camera application; the media management module sets the first identifier in the media service module; the media policy module receives, through the communication unit 502, the first identifier sent by the media service module, converts the first identifier into a second identifier, and sends the second identifier to the camera hardware abstraction module through the communication unit 502;
in terms of invoking the interest point identification algorithm to process the first preview image, the processing unit 501 is specifically configured to: the camera hardware abstraction module calls, according to the second identifier and through the media policy module, the interest point identification algorithm in the algorithm management module to process the first preview image.
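The identifier translation and dispatch can be pictured as in the sketch below. The mapping from the first identifier to the second identifier, and the placeholder algorithm table, are assumptions made only for illustration; the class names simply mirror the module names used above.

    // Hypothetical identifier translation and algorithm dispatch.
    class MediaPolicyModule(private val idMap: Map<String, Int>) {
        // Convert the app-facing first identifier into the HAL-internal second identifier.
        fun toSecondIdentifier(firstIdentifier: String): Int =
            idMap[firstIdentifier] ?: error("unknown algorithm identifier: $firstIdentifier")
    }

    class AlgorithmManagementModule {
        // Placeholder algorithm table keyed by the second identifier; the entry below
        // returns a fixed rectangle instead of a real recognition result.
        private val algorithms: Map<Int, (ByteArray) -> IntArray> =
            mapOf(1 to { _: ByteArray -> intArrayOf(0, 0, 100, 100) })

        fun run(secondIdentifier: Int, preview: ByteArray): IntArray =
            algorithms.getValue(secondIdentifier).invoke(preview)
    }

    class CameraHardwareAbstractionModule(
        private val policy: MediaPolicyModule,
        private val algorithms: AlgorithmManagementModule
    ) {
        fun processPreview(firstIdentifier: String, preview: ByteArray): IntArray {
            val secondIdentifier = policy.toSecondIdentifier(firstIdentifier)
            return algorithms.run(secondIdentifier, preview)
        }
    }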
In a possible example, in terms of adjusting the camera parameters according to the target area information and acquiring the second preview image, the processing unit 501 is specifically configured to: the third-party camera application determines current first scene information; the third-party camera application determines a first camera parameter adjustment strategy according to the first scene information, the first preview image, and the target area information, and sends the first camera parameter adjustment strategy to the camera hardware abstraction module in the hardware abstraction layer through the communication unit 502; the camera hardware abstraction module adjusts the corresponding camera hardware parameters at the hardware layer through the driver layer according to the first camera parameter adjustment strategy; and the third-party camera application acquires the second preview image using the adjusted camera hardware parameters.
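A sketch of how a first camera parameter adjustment strategy might be derived from the scene information and the target area; the scene categories and the exposure values are invented for illustration and do not come from this description.

    // Hypothetical derivation of the first camera parameter adjustment strategy.
    enum class Scene { LOW_LIGHT, BACKLIT, DEFAULT }

    data class CameraParameters(val focusX: Int, val focusY: Int, val exposureCompensation: Int)

    fun firstAdjustmentStrategy(scene: Scene, targetArea: IntArray): CameraParameters {
        // Focus on the centre of the target area reported by the hardware abstraction layer.
        val centreX = (targetArea[0] + targetArea[2]) / 2
        val centreY = (targetArea[1] + targetArea[3]) / 2
        val exposure = when (scene) {
            Scene.LOW_LIGHT -> 2
            Scene.BACKLIT -> 1
            Scene.DEFAULT -> 0
        }
        return CameraParameters(centreX, centreY, exposure)
    }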
In one possible example, after the hardware abstraction layer receives the first preview image and the first data request from the third-party camera application through the communication unit 502 and calls the interest point identification algorithm to process the first preview image to obtain the target area, the processing unit 501 is further configured to: the algorithm management module determines the image type in the target area; the algorithm management module determines a second camera parameter adjustment strategy according to the image type; and the algorithm management module sends the second camera parameter adjustment strategy to the third-party camera application through the communication unit 502, where the second camera parameter adjustment strategy is used by the third-party camera application to adjust the camera parameters so as to obtain the target image.
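The image-type-driven second strategy could look like the sketch below; the image types and the parameter values are assumptions chosen only to make the mapping concrete.

    // Hypothetical mapping from the detected image type to a second adjustment strategy.
    enum class ImageType { FACE, TEXT, LANDSCAPE }

    data class SecondStrategy(val sharpen: Boolean, val saturationBoost: Int)

    fun secondAdjustmentStrategy(type: ImageType): SecondStrategy = when (type) {
        ImageType.FACE -> SecondStrategy(sharpen = false, saturationBoost = 0)
        ImageType.TEXT -> SecondStrategy(sharpen = true, saturationBoost = -1)
        ImageType.LANDSCAPE -> SecondStrategy(sharpen = true, saturationBoost = 1)
    }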
The image capturing apparatus 500 may further include a storage unit 503 for storing program codes and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentations of the same technical concept, the description of the method embodiment in the present application applies equally to the apparatus embodiment and is not repeated here.
Embodiments of the present application further provide a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative; the division of the units is only a division of logical functions, and other divisions may be used in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not implemented. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be in electrical or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned memory includes various media capable of storing program codes, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those skilled in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be performed by associated hardware as instructed by a program, which may be stored in a computer-readable memory, which may include: flash disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing describes the embodiments of the present application in detail, and specific examples are used herein to illustrate the principles and implementations of the present application; the above description of the embodiments is only intended to help understand the method and core concept of the present application. Meanwhile, a person skilled in the art may, according to the idea of the present application, make changes to the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present application.

Claims (9)

1. An image shooting method, applied to an electronic device, wherein the electronic device comprises an operating system and a media service module, a third-party camera application is provided at an application layer of the operating system, a camera hardware abstraction module, a media policy module, and an algorithm management module are provided at a hardware abstraction layer of the operating system, and the media policy module is communicatively connected to the camera hardware abstraction module and the algorithm management module, the method comprising:
the third party camera application sending a first preview image and a first data request to a hardware abstraction layer of the operating system;
when the hardware abstraction layer receives the first preview image and the first data request sent by the third-party camera application, calling an interest point identification algorithm to process the first preview image to obtain a target area, and sending target area information to the third-party camera application, wherein the interest point identification algorithm is opened to the third-party camera application by the operating system at the request of the third-party camera application through the media service module;
the third-party camera application receives the target area information sent by the hardware abstraction layer and adjusts camera hardware parameters according to the target area information to obtain a target image;
before the calling of the interest point identification algorithm to process the first preview image, the method further comprises: the media management module receives a first identifier corresponding to the interest point identification algorithm sent by the third-party camera application; the media management module sets the first identifier in the media service module; and the media policy module receives the first identifier sent by the media service module, converts the first identifier into a second identifier, and sends the second identifier to the camera hardware abstraction module;
the calling of the interest point identification algorithm to process the first preview image comprises: the camera hardware abstraction module calls, according to the second identifier and through the media policy module, the interest point identification algorithm in the algorithm management module to process the first preview image.
2. The method of claim 1, wherein the third-party camera application receiving the target area information sent by the hardware abstraction layer and adjusting the camera hardware parameters according to the target area information to obtain the target image comprises:
the third-party camera application receives the target area information sent by the hardware abstraction layer, adjusts camera hardware parameters according to the target area information and obtains a second preview image;
the third party camera application sending the second preview image to the hardware abstraction layer;
the hardware abstraction layer receives the second preview image sent by the third-party camera application, calls an image processing algorithm to process the second preview image, obtains the processed target image, and sends the target image to the third-party camera application, wherein the image processing algorithm is opened to the third-party camera application by the operating system at the request of the third-party camera application through the media service module, a first definition of the target area in the target image is greater than a second definition, and the second definition is the definition of the area of the target image other than the target area.
3. The method of claim 1 or 2, wherein a media management module is provided at an application layer of the operating system, wherein the third-party camera application is communicatively coupled to the media management module, wherein the media management module is communicatively coupled to the media service module, and wherein before the third-party camera application sends the first preview image and the first data request to a hardware abstraction layer of the operating system, the method further comprises:
the media service module receives a version information acquisition request carrying an authentication code sent by the third-party camera application through the media management module;
the media service module authenticates the third party camera application according to the authentication code;
and when the authentication succeeds, the media service module feeds the version information back to the third-party camera application through the media management module.
4. The method of claim 2, wherein the adjusting the camera hardware parameters according to the target area information to obtain the second preview image comprises:
the third party camera application determining current first scene information;
the third-party camera application determines a first camera parameter adjustment strategy according to the first scene information, the first preview image and the target area information, and sends the first camera parameter adjustment strategy to a camera hardware abstraction module in a hardware abstraction layer;
the camera hardware abstraction module adjusts the corresponding camera hardware parameters at the hardware layer through the driver layer according to the first camera parameter adjustment strategy;
and the third-party camera application acquires the second preview image according to the adjusted camera hardware parameters.
5. The method according to any one of claims 1-3, wherein after the hardware abstraction layer, upon receiving the first preview image and the first data request sent by the third-party camera application, calls the interest point identification algorithm to process the first preview image to obtain the target area, the method further comprises:
an algorithm management module determines the image type in the target area;
the algorithm management module determines a second camera parameter adjustment strategy according to the image type;
and the algorithm management module sends the second camera parameter adjustment strategy to the third-party camera application, wherein the second camera parameter adjustment strategy is used for the third-party camera application to adjust the camera parameters so as to acquire the target image.
6. An image shooting apparatus, applied to an electronic device, wherein the electronic device comprises an operating system and a media service module, a third-party camera application is provided at an application layer of the operating system, a camera hardware abstraction module, a media policy module, and an algorithm management module are provided at a hardware abstraction layer of the operating system, and the media policy module is communicatively connected to the camera hardware abstraction module and the algorithm management module, the apparatus comprising a processing unit and a communication unit, wherein:
the processing unit is configured to send, by the third-party camera application through the communication unit, a first preview image and a first data request to the hardware abstraction layer of the operating system; when the hardware abstraction layer receives, through the communication unit, the first preview image and the first data request sent by the third-party camera application, call an interest point identification algorithm to process the first preview image to obtain a target area, and send target area information to the third-party camera application through the communication unit, wherein the interest point identification algorithm is opened to the third-party camera application by the operating system at the request of the third-party camera application through the media service module; and receive, by the third-party camera application through the communication unit, the target area information sent by the hardware abstraction layer, and adjust camera hardware parameters according to the target area information to obtain a target image;
the processing unit, before the interest point identification algorithm is invoked to process the first preview image, is further configured to: the media management module receives a first identifier corresponding to the interest point identification algorithm sent by the third-party camera application; the media management module sets the first identifier in the media service module; and the media policy module receives the first identifier sent by the media service module, converts the first identifier into a second identifier, and sends the second identifier to the camera hardware abstraction module;
in terms of calling the interest point identification algorithm to process the first preview image, the processing unit is specifically configured to: the camera hardware abstraction module calls, according to the second identifier and through the media policy module, the interest point identification algorithm in the algorithm management module to process the first preview image.
7. A chip, comprising: a processor for calling and running a computer program from a memory so that a device on which the chip is installed performs the method of any one of claims 1-5.
8. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-5.
9. A computer-readable storage medium, characterized in that a computer program for electronic data exchange is stored, wherein the computer program causes a computer to perform the method according to any one of claims 1-5.
CN201911253936.4A 2019-12-09 2019-12-09 Image shooting method and related device Active CN110995994B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911253936.4A CN110995994B (en) 2019-12-09 2019-12-09 Image shooting method and related device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911253936.4A CN110995994B (en) 2019-12-09 2019-12-09 Image shooting method and related device

Publications (2)

Publication Number Publication Date
CN110995994A CN110995994A (en) 2020-04-10
CN110995994B true CN110995994B (en) 2021-09-14

Family

ID=70091481

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911253936.4A Active CN110995994B (en) 2019-12-09 2019-12-09 Image shooting method and related device

Country Status (1)

Country Link
CN (1) CN110995994B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112199127A (en) * 2020-10-10 2021-01-08 Oppo(重庆)智能科技有限公司 Image data processing method and device, mobile terminal and storage medium
WO2022082361A1 (en) * 2020-10-19 2022-04-28 深圳市锐明技术股份有限公司 Image information processing method and terminal device
CN112399087B (en) * 2020-12-07 2022-05-20 Oppo(重庆)智能科技有限公司 Image processing method, image processing apparatus, image capturing apparatus, electronic device, and storage medium
CN114745495B (en) * 2021-01-07 2023-06-23 北京小米移动软件有限公司 Image generation method, device and storage medium
CN112925572B (en) * 2021-03-01 2023-05-23 联想(北京)有限公司 Control method and device and electronic equipment
CN113179369B (en) * 2021-04-08 2023-03-21 重庆传音通讯技术有限公司 Shot picture display method, mobile terminal and storage medium
CN115982708A (en) * 2021-10-15 2023-04-18 Oppo广东移动通信有限公司 Image processing method, device, equipment and storage medium
CN116361022A (en) * 2021-12-28 2023-06-30 北京小米移动软件有限公司 Image processing method, device, electronic equipment and storage medium
CN116366914A (en) * 2021-12-28 2023-06-30 北京小米移动软件有限公司 Video data processing method, electronic device and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104657957A (en) * 2015-03-16 2015-05-27 龙旗电子(惠州)有限公司 Method for realizing picture processing capability of intelligent cell phone
CN104657956A (en) * 2015-03-16 2015-05-27 龙旗电子(惠州)有限公司 Method for realizing smart phone picture beautifying function
CN105072339A (en) * 2015-08-03 2015-11-18 广东欧珀移动通信有限公司 Method, mobile terminal and system for regulating and controlling focusing of camera rapidly through selfie stick
CN107124431A (en) * 2017-06-22 2017-09-01 浙江数链科技有限公司 Method for authenticating, device, computer-readable recording medium and right discriminating system
CN107465869A (en) * 2017-07-27 2017-12-12 努比亚技术有限公司 A kind of focus adjustment method and terminal
CN108495050A (en) * 2018-06-15 2018-09-04 Oppo广东移动通信有限公司 Photographic method, device, terminal and computer readable storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102349428B1 (en) * 2015-08-12 2022-01-10 삼성전자주식회사 Method for processing image and electronic device supporting the same
CN109101352B (en) * 2018-08-30 2021-08-06 Oppo广东移动通信有限公司 Image processing algorithm architecture, algorithm calling method, device, storage medium and mobile terminal
CN110086967B (en) * 2019-04-10 2021-02-05 Oppo广东移动通信有限公司 Image processing method, image processor, photographing device and electronic equipment
CN110177215A (en) * 2019-06-28 2019-08-27 Oppo广东移动通信有限公司 Image processing method, image processor, filming apparatus and electronic equipment


Also Published As

Publication number Publication date
CN110995994A (en) 2020-04-10

Similar Documents

Publication Publication Date Title
CN110995994B (en) Image shooting method and related device
JP7338044B2 (en) Face image transmission method, value transfer method, device and electronic device
WO2021115038A1 (en) Application data processing method and related apparatus
CN110958399B (en) High dynamic range image HDR realization method and related product
CN111147749A (en) Photographing method, photographing device, terminal and storage medium
US20190108409A1 (en) Face recognition method and related product
JP2021518956A (en) Image processing methods and devices, electronic devices and computer-readable storage media
CN110958390B (en) Image processing method and related device
CN110991368A (en) Camera scene recognition method and related device
CN110955541B (en) Data processing method, device, chip, electronic equipment and readable storage medium
CN110971830B (en) Anti-shake method for video shooting and related device
CN113923461B (en) Screen recording method and screen recording system
WO2019047708A1 (en) Resource configuration method and related product
CN110991369A (en) Image data processing method and related device
CN107657219B (en) Face detection method and related product
CN110933314B (en) Focus-following shooting method and related product
CN110990088B (en) Data processing method and related equipment
CN115344885B (en) Display method, device and terminal
WO2017157435A1 (en) A method and system for visual privacy protection for mobile and wearable devices
CN110941344B (en) Method for obtaining gazing point data and related device
EP3001342A1 (en) Methods and systems for displaying biometric data during capture
CN110177332B (en) Data transmission method and device
CN110996089B (en) Volume calculation method and related device
KR20160019184A (en) Mobile terminal and method for controlling the mobile terminal
WO2022242343A1 (en) Cross-device text continuity method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant