CN110942047A - Application optimization method and related product - Google Patents

Application optimization method and related product

Info

Publication number
CN110942047A
Authority
CN
China
Prior art keywords
image data
module
data
party application
application
Prior art date
Legal status
Granted
Application number
CN201911252510.7A
Other languages
Chinese (zh)
Other versions
CN110942047B (en)
Inventor
马亚辉
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201911252510.7A priority Critical patent/CN110942047B/en
Publication of CN110942047A publication Critical patent/CN110942047A/en
Application granted granted Critical
Publication of CN110942047B publication Critical patent/CN110942047B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/96Management of image or video recognition tasks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/10Pre-processing; Data cleansing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiments of the present application provide an application optimization method and related products, applied to an electronic device, wherein the method comprises the following steps: the third-party application sends an image preview request to the hardware abstraction module; the hardware abstraction module calls a bottom-layer driver to collect initial image data and sends the initial image data to the media policy module; the media policy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data; the media policy module sends the final image data to the third-party application; and the third-party application performs object recognition processing according to the final image data. In this way, the quality of the data frames reported to the third-party application is guaranteed, and the recognition accuracy of the third-party application is improved.

Description

Application optimization method and related product
Technical Field
The present application relates to the field of image processing, and in particular, to an application optimization method and related products.
Background
Object recognition, the analysis of objects in a picture or video by a machine, is one of the classic problems in computer vision. Its task is to mark the position of objects in an image with a recognition window, determine which objects are present, recognize the names of the objects, and give their categories. From the traditional framework of hand-designed features plus shallow classifiers to end-to-end recognition frameworks based on deep learning, object recognition has matured step by step. When deployed on a mobile device as a third-party application, however, such an application can only access certain standard camera APIs (Application Programming Interfaces) exposed by the system, and must then perform recognition on the data received from those APIs.
Disclosure of Invention
Embodiments of the present application provide an application optimization method and related products, which can process image data and perform object recognition according to the processed images.
In a first aspect, an embodiment of the present application provides an application optimization method, which is applied to an electronic device, where the electronic device includes a media service module and an operating system, an application layer of the operating system is provided with a third-party application, and a hardware abstraction layer of the operating system is provided with a hardware abstraction module and a media policy module; the method comprises the following steps:
the third party application sends an image preview request to the hardware abstraction module;
the hardware abstraction module calls a bottom-layer driver to collect initial image data and sends the initial image data to the media policy module;
the media policy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced-function algorithm module that the third-party application selects through the media service module and requests the operating system to open to the application;
the media policy module sends the final image data to the third party application;
and the third-party application carries out object identification processing according to the final image data.
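The five steps above can be sketched, purely illustratively, as a chain of cooperating objects. All class and function names here (HardwareAbstractionModule, MediaPolicyModule, sharpen_stub) are hypothetical stand-ins for the modules described in the text, not the actual implementation.

```python
class HardwareAbstractionModule:
    def capture_initial_frames(self):
        # Stand-in for the bottom-layer driver collecting raw frames
        # in response to the image preview request.
        return [{"yuv": [16, 128, 128], "gyro": 0.01, "acc": 0.02}]

class MediaPolicyModule:
    def __init__(self, algorithm):
        # `algorithm` plays the role of the pre-enabled,
        # enhanced-function algorithm module.
        self.algorithm = algorithm

    def process(self, frames):
        # Run every initial frame through the algorithm module to
        # produce the final image data.
        return [self.algorithm(frame) for frame in frames]

def sharpen_stub(frame):
    # Placeholder for the enhancement/filtering the algorithm module
    # performs; it merely tags the frame here.
    return dict(frame, enhanced=True)

def preview_and_recognize():
    hal = HardwareAbstractionModule()
    policy = MediaPolicyModule(sharpen_stub)
    final = policy.process(hal.capture_initial_frames())
    # The third-party application would run object recognition on `final`.
    return final
```

The point of the sketch is the data path: frames originate in the hardware abstraction layer, pass through the policy module's pre-enabled algorithm, and only the processed result reaches the third-party application.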
In a second aspect, an embodiment of the present application provides an application optimization apparatus, which is applied to an electronic device, where the electronic device includes a media service module and an operating system, an application layer of the operating system is provided with a third-party application, a hardware abstraction layer of the operating system is provided with a hardware abstraction module and a media policy module, the apparatus includes a processing unit and a communication unit, where,
the processing unit is used for sending an image preview request to the hardware abstraction module through the communication unit according to the third-party application; the hardware abstraction module calls a bottom-layer driver to collect initial image data and sends the initial image data to the media policy module; the media policy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced-function algorithm module that the third-party application selects through the media service module and requests the operating system to open to the application; the media policy module sends the final image data to the third-party application; and the third-party application performs object recognition processing according to the final image data.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing steps in any method of the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a chip, including: and the processor is used for calling and running the computer program from the memory so that the device provided with the chip executes part or all of the steps described in any method of the first aspect of the embodiment of the application.
In a fifth aspect, this application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in any one of the methods of the first aspect of this application.
In a sixth aspect, the present application provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to perform some or all of the steps as described in any one of the methods of the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, the third-party application of the electronic device first sends an image preview request to the hardware abstraction module; the hardware abstraction module then calls a bottom-layer driver to collect initial image data and sends the initial image data to the media policy module; the media policy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced-function algorithm module that the third-party application selects through the media service module and requests the operating system to open to the application; the media policy module then sends the final image data to the third-party application; and finally, the third-party application performs object recognition processing according to the final image data. In this way, the embodiments of the present application can use the media platform OMedia to obtain image data information that the third-party application cannot access on its own and filter it, so that the quality of the data frames reported to the third-party application is guaranteed and the recognition accuracy of the third-party application is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2-1 is a schematic flow chart diagram illustrating an application optimization method provided in an embodiment of the present application;
FIG. 2-2 is a schematic flow chart of filtering target data provided in an embodiment of the present application;
FIG. 3 is a schematic flow chart diagram illustrating another application optimization method provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device provided in an embodiment of the present application;
fig. 5 is a block diagram illustrating functional units of an application optimization apparatus according to an embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device according to the embodiments of the present application may be an electronic device with communication capability, and the electronic device may include various handheld devices with wireless communication function, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices (terminal device), and so on.
At present, when a real-time object recognition application is deployed on a mobile device, the real-time picture taken from the camera preview has a great influence on object recognition: when the captured picture is blurred, the result given by the model is affected. The application can only optimize based on the data available to it, and cannot filter the input pictures based on acceleration (Acc) and/or gyroscope (Gyro) data.
In view of the foregoing problems, embodiments of the present application provide an application optimization method and related products, which are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, fig. 1 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. An electronic device 100 according to an embodiment of the present application includes a media service module and an operating system (e.g., the Android operating system, which is not the only possibility). An application layer of the operating system is provided with a third-party application and a media management module (also referred to as a media interface module); a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module. Further, the operating-system native architecture includes a framework layer and a driver layer. The framework layer includes application interfaces of various native applications (e.g., a native camera application program interface), application services (e.g., a native camera service), and a framework layer interface (e.g., the Google HAL3 interface). The hardware abstraction layer includes a hardware abstraction layer interface (e.g., HAL3.0) and hardware abstraction modules of various native applications (e.g., a camera hardware abstraction module). The driver layer includes various drivers (e.g., a screen display driver, an audio driver, etc.) for enabling various hardware of the electronic device, such as the image signal processor (ISP) and front-end image sensors.
The media service module is independent of the operating system. Third-party applications can communicate with the media service module through the media management module, and the media service module can communicate with the media policy module through an Android native information link formed by an application interface, an application service, a framework layer interface, a hardware abstraction layer interface, and the hardware abstraction module. The media policy module communicates with the algorithm management module, which maintains an Android native algorithm library; the algorithm management module comprises algorithm modules and can call algorithms in the algorithm library for data processing. The algorithm library includes the enhancement functions supported by various native applications; for example, it supports the native camera application in realizing enhancement functions such as binocular shooting, beautification, sharpening, and night vision. In addition, the media service module can also communicate directly with the media policy module or the algorithm management module.
Based on the above framework, the media service module may enable an algorithm module in the algorithm library through the Android native information link, the media policy module, and the algorithm management module; or directly through the media policy module and the algorithm management module; or directly through the algorithm management module, thereby opening the enhancement functions associated with native applications to third-party applications.
Based on the above framework, the media service module may invoke a driver to enable certain hardware through the Android native information link, or through a first information link composed of the media policy module and the hardware abstraction module, or through a second information link composed of the media policy module, the algorithm management module, and the hardware abstraction module, thereby opening the hardware associated with native applications to third-party applications.
Referring to fig. 2-1, fig. 2-1 is a schematic flowchart of an application optimization method according to an embodiment of the present application, where the application optimization method is applied to the electronic device shown in fig. 1, and as shown in the figure, the application optimization method includes the following steps.
Step 201, the third party application sends an image preview request to the hardware abstraction module.
The third-party application is in communication connection with the hardware abstraction (Camera HAL) module. When the third-party application needs to identify an object, it sends an image preview request to the Camera HAL module. After receiving the request, the Camera HAL module parses it, converts the image preview request into information that the bottom layer can recognize, and can also determine whether the image preview request was sent by a third-party application.
Step 202, the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media policy module.
The Camera HAL module may call a bottom-layer driver located in the hardware layer; after parsing the received image preview request, the Camera HAL module sends the parsed image preview request to the bottom-layer driver.
Step 203, the media policy module receives the initial image data, and invokes a pre-enabled algorithm module to process the initial image data to obtain final image data, where the algorithm module is an algorithm module with enhanced functions that the third-party application selects and requests the operating system to open to the application through the media service module.
After receiving the image preview request of the third-party application from the Camera HAL module, the bottom-layer driver can acquire the related initial image data from the hardware layer and report it to the media policy (OMedia Strategy) module. The OMedia Strategy module can send the acquired initial image data to an algorithm module in the algorithm management (AlgoManager) module for blur-filtering processing, and can of course also call the algorithm module in the AlgoManager module to perform data filtering directly. The initial image data includes, but is not limited to, color-coded YUV data, gyroscope (Gyro) data, and acceleration (Acc) data, where the Gyro data and Acc data may be obtained from the gyroscope sensor and acceleration sensor of the hardware layer, but may also be obtained by other means.
Step 204, the media policy module sends the final image data to the third party application.
And step 205, the third-party application performs object recognition processing according to the final image data.
After the initial image data is processed, the algorithm module sends the obtained final image data to the OMedia Strategy module in communication connection with it, and the OMedia Strategy module sends the final image data to the third-party application that previously sent the image preview request. Alternatively, the OMedia Strategy module can send the obtained final image data to the Camera HAL module first, which then forwards it to the third-party application, so that the third-party application can perform object recognition processing according to the final image data.
Object recognition on an image is realized by using a model obtained by training a neural network with a large amount of object data, where the object data includes, but is not limited to, object feature information such as object names, object colors, and object shapes. Before the final image data is delivered to the third-party application for object recognition, a format conversion is performed on it; for example, final image data in the YUV format is converted into image data in the RGB format, or into a bitmap.
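The YUV-to-RGB conversion mentioned above can be sketched with the widely used BT.601 full-range coefficients. This is an illustrative sketch, not the device's actual conversion: the patent does not specify which matrix is used, so the coefficients here are an assumption.

```python
def yuv_to_rgb(y, u, v):
    """Convert one full-range YUV pixel (0-255 per component) to RGB.

    Uses the common BT.601 coefficients; the actual device may use a
    different conversion matrix (this is an assumption for illustration).
    """
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    # Clamp each channel into the valid 0-255 range.
    clamp = lambda c: max(0, min(255, round(c)))
    return clamp(r), clamp(g), clamp(b)
```

For instance, a neutral pixel (U = V = 128) maps to equal R, G, and B values, which is a quick sanity check on the coefficients.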
It can be seen that, in the embodiments of the present application, the third-party application of the electronic device first sends an image preview request to the hardware abstraction module; the hardware abstraction module then calls a bottom-layer driver to collect initial image data and sends the initial image data to the media policy module; the media policy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced-function algorithm module that the third-party application selects through the media service module and requests the operating system to open to the application; the media policy module then sends the final image data to the third-party application; and finally, the third-party application performs object recognition processing according to the final image data. In this way, the embodiments of the present application can use the media platform OMedia to obtain image data information that the third-party application cannot access on its own and filter it, so that the quality of the data frames reported to the third-party application is guaranteed and the recognition accuracy of the third-party application is improved.
In one possible example, the initial image data comprises gyroscope data and/or acceleration data corresponding to each frame of image, the pre-enabled algorithm module being configured to: performing graying processing and Laplace transform on the initial image data to obtain a two-dimensional matrix; calculating a mean and/or standard deviation of the two-dimensional matrix; filtering initial image data corresponding to the two-dimensional matrix which does not accord with the preset mean value and/or the preset standard deviation to obtain target image data; and processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data.
After the initial image data is obtained, the algorithm module performs first-step filtering on it: it first performs graying and a Laplacian transform on the initial image data to obtain a two-dimensional matrix, then compares the data corresponding to each frame against the mean and standard deviation of the two-dimensional matrix, and filters out all image data corresponding to frames that do not meet the rule.
The initially obtained image data is three-channel data composed of the three components of a color image, and the graying processing turns the color image of three-channel data into a grayscale image of single-channel data. The standard deviation is the arithmetic square root of the variance and reflects the degree of dispersion of a data set.
As can be seen, in this example, graying the initial image data first reduces its data size by turning three-channel data into single-channel data, which facilitates subsequent processing, and applying a Laplacian transform to it sharpens the image, which both retains the gray values in the image and enhances the contrast at abrupt gray-value changes. Finally, the initial image data is filtered by the mean and the standard deviation to obtain the final image data, retaining image data that is closer to the mean and not too discrete, and reducing the amount of initial image data to be processed while preserving the characteristics of the original image.
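The graying, Laplacian transform, and standard-deviation test above can be sketched as follows. This is a minimal illustration assuming NumPy arrays, BT.601 luma weights for graying, the common 3×3 Laplacian kernel, and a hypothetical threshold value; the patent does not fix any of these choices.

```python
import numpy as np

# The common 3x3 Laplacian kernel used for sharpness measurement.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def to_gray(rgb):
    # Graying: turn three-channel data into single-channel data
    # (ITU-R BT.601 luma weights, an assumption for illustration).
    return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

def laplacian(gray):
    # Valid-mode 2-D convolution of the grayscale image with the kernel,
    # yielding the two-dimensional matrix described in the text.
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += LAPLACIAN[i, j] * gray[i:i + h - 2, j:j + w - 2]
    return out

def is_sharp(rgb, std_threshold=5.0):
    # A frame whose Laplacian response has a low standard deviation has
    # few gray-value transitions and is treated as blurred; the default
    # threshold is hypothetical and would be tuned in practice.
    return laplacian(to_gray(rgb)).std() >= std_threshold
```

A perfectly flat frame has zero Laplacian deviation and is filtered out, while a high-contrast frame passes.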
In one possible example, the processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data includes: setting a first threshold value; the algorithm management module judges whether the gyroscope data is larger than the first threshold value; if so, discarding target image data corresponding to the gyroscope data; and if not, taking the target image data as final image data.
When the initial image data is obtained through the OMedia Strategy module, the OMedia Strategy module obtains Gyro data from the gyroscope sensor of the hardware layer, filters out frames whose Gyro data is larger than the first threshold, and retains those whose Gyro data is smaller than the first threshold as final image data; the first threshold may be a specific numerical value or a range.
As can be seen, in this example, the second-step filtering is performed on the target image data according to the Gyro data, so that the quality of the data frame reported to the third-party application can be improved, and the accuracy of the third-party application for performing object identification by using the final image data is higher.
In one possible example, the processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data includes: setting a second threshold value; judging whether the acceleration data is larger than the second threshold value; if so, discarding the target image data corresponding to the acceleration data; and if not, taking the target image data as the final image data.
When the initial image data is acquired through the OMedia Strategy module, the OMedia Strategy module acquires Acc data from the acceleration sensor of the hardware layer, filters out frames whose Acc data is larger than the second threshold, and retains those whose Acc data is smaller than the second threshold as final image data; the second threshold may be a specific numerical value or a range.
As can be seen, in this example, the second filtering step is performed on the target image data according to the Acc data, so that the quality of the data frame reported to the third-party application can be improved, and the accuracy of the third-party application for performing object identification by using the final image data is higher.
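The two threshold tests above (Gyro against the first threshold, Acc against the second) amount to a simple second-step filter over the frames. A minimal sketch, in which the frame layout (a dict with "gyro" and "acc" magnitudes) and the default threshold values are hypothetical:

```python
def filter_by_motion(frames, gyro_threshold=0.05, acc_threshold=0.5):
    """Second-step filtering of target image data.

    Keep only frames whose gyroscope reading does not exceed the first
    threshold and whose acceleration reading does not exceed the second
    threshold; discard the rest. Threshold defaults are illustrative.
    """
    return [f for f in frames
            if f["gyro"] <= gyro_threshold and f["acc"] <= acc_threshold]
```

A frame that exceeds either threshold is dropped even if the other reading is acceptable, matching the independent Gyro and Acc tests described above.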
In one possible example, the initial image data includes first gyroscope data corresponding to a first frame image, and second gyroscope data and first acceleration data corresponding to a second frame image, and the media policy module invokes the algorithm module to process the initial image data to obtain final image data, including: calculating a moving distance of the second frame image relative to the first frame image according to the first acceleration data; judging whether the moving distance of the second frame image is integral multiple of the width of the first frame image, wherein the width of the first frame image is equal to the width of the second frame image; and if so, reserving the second frame image, and obtaining final image data according to the first gyroscope data and the second gyroscope data.
This embodiment can be applied to a panoramic photographing scene. When taking a panoramic photograph, the user needs to move the electronic device at a constant speed; otherwise, the quality of the finally synthesized photo is seriously affected, for example by severe distortion. It is therefore necessary to judge whether each obtained frame image was photographed at a constant speed. This can be determined by directly comparing the magnitude of the Acc data of each frame image: the moving distance of a frame image relative to the previous frame image can be calculated from the Acc data, and whether the device moved at a constant speed can be judged from the moving distance. For example, let the width of each frame image be W, and suppose the moving distance of the second frame image relative to the first frame image, determined from the acceleration data of the second frame image, is N × W, where N = 0, 1, 2, …; that is, the moving distance of the second frame image is an integral multiple of the width of the first frame image, and it can be determined that the second frame image can be used for photo synthesis.
It can be seen that, in this example, the relative movement distance of the image is determined according to the acceleration, and then whether the frame image is available is determined according to the relative movement distance, so that it can be ensured that the images for synthesizing the complete photo do not overlap, and the finally synthesized image is not seriously distorted.
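The integral-multiple check above can be sketched as a small predicate. The tolerance parameter is an assumption: a real implementation would accept displacements close to N × W rather than demand exact equality.

```python
def usable_for_panorama(moving_distance, frame_width, tol=0.05):
    """Return True when a frame's displacement relative to the previous
    frame is (approximately) an integer multiple N * W of the frame
    width, as described above. The tolerance is a hypothetical choice.
    """
    if frame_width <= 0:
        raise ValueError("frame width must be positive")
    ratio = moving_distance / frame_width
    # Accept the frame when the ratio is within `tol` of an integer N.
    return abs(ratio - round(ratio)) <= tol
```

With a frame width of 100, a displacement of 200 (N = 2) passes, while 130 falls between multiples and is rejected; N = 0 (no movement) also passes, per the N = 0, 1, 2, … sequence in the text.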
In one possible example, the obtaining final image data according to the first gyroscope data and the second gyroscope data includes: the algorithm management module is used for respectively carrying out integral operation on the first gyroscope data and the second gyroscope data to obtain an integral value; determining a rotation angle of the second frame image corresponding to the second gyroscope data relative to the first frame image corresponding to the first gyroscope data according to the integral value; and adjusting the initial image data of the second frame image corresponding to the second gyroscope data according to the rotation angle to obtain final image data.
Each frame of image has corresponding YUV data and/or Gyro data and/or Acc data, and the initial image data includes all data corresponding to the images of the different frames. Specifically, in a panoramic shooting scene, the multiple frames of images can be filtered according to the Gyro data: the rotation angle of the second frame image relative to the first frame image is determined from the integral value calculated from the Gyro data of the second frame image, and the related image data of the second frame image is then adjusted so that its rotation angle relative to the previous frame image is smaller than a preset value. The adjusted initial image data is set as the final image data and sent to the third-party application for object identification. The images filtered according to the Gyro data may be the image data retained after screening according to the Acc data.
As can be seen, in this example, the rotation angle of a subsequent image relative to the previous image (or relative to the first image) is determined according to the Gyro data in the initial image data, and the initial image data adjusted according to the rotation angle is determined as the final image data. This improves the quality of the data frames reported to the third-party application, so the accuracy of object identification performed by the third-party application on the final image data is higher.
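The Gyro-based screening described above can be sketched as follows: the angular-velocity samples recorded between two frames are integrated to estimate the rotation angle, and a frame whose angle relative to the previous frame exceeds a preset value is flagged for adjustment. The sample interval and the preset limit here are assumed values, not ones given in the embodiment:

```python
def rotation_angle(gyro_samples, dt):
    """Integrate angular-velocity samples (rad/s) captured between two
    frames to estimate how far the second frame is rotated relative to
    the first (the 'integral operation' on the Gyro data)."""
    return sum(w * dt for w in gyro_samples)

def needs_adjustment(angle_first, angle_second, max_angle=0.05):
    """Flag the second frame for rotation correction when its angle
    relative to the first frame meets or exceeds a preset limit
    (max_angle is an assumed placeholder for the 'preset value')."""
    return abs(angle_second - angle_first) > max_angle
```

When a frame is flagged, its image data would be rotated back by the measured angle before being set as final image data.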
In one possible example, a hardware abstraction layer of the operating system is provided with an algorithm management module; before the third-party application sends an image preview request to the hardware abstraction module, the method further includes: the third-party application sends the enhanced function selected to be opened to the media service module; the media service module receives the enhanced function selected to be opened and sends the enhanced function selected to be opened to the algorithm management module through the media strategy module; the algorithm management module enables the algorithm module of the enhanced function selected to be opened.
Specifically, the third-party application can send a request to the media service (Omedia Service) module. The Omedia Service module then sends version information of the media platform to the third-party application according to the request, and the third-party application selects the enhanced function it needs to use; the Omedia Service module is then used to enable the corresponding algorithm module in the Algo Manager module. That is, the Omedia Service module can establish a communication connection with the third-party application to obtain the image acquisition request of the third-party application. After the Omedia Service module parses the request, the information about which algorithm-module functions need to be used can be transmitted to the Camera HAL module, and the related information is sent through the Camera HAL module to enable the algorithm. Before the algorithm module is enabled, the related algorithm of the enhanced function is not open to the third-party application; that is, only after the algorithm module is enabled through the Omedia Service module can the algorithm of the bottom-layer core function be used by the third-party application. The third-party application can also send control information such as an image preview request to the media management module, which forwards the image preview request to the Omedia Service module.
As can be seen, in this example, the media service module enables the third-party application to use the enhanced function of the operating system bottom layer, but the bottom layer is not directly open to the third-party application, so that the security can be effectively controlled, and the safe opening of the bottom layer function is facilitated.
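A minimal sketch of this enable flow, with the module names taken from the description but all class structure, method names, and the function identifier assumed for illustration:

```python
class AlgoManager:
    """Hardware-abstraction-layer module holding the enhanced-function
    algorithm modules; only enabled functions are usable."""
    def __init__(self):
        self.enabled = set()

    def enable(self, function):
        self.enabled.add(function)

class MediaPolicy:
    """Media policy module: relays the selected enhanced function down
    to the Algo Manager."""
    def __init__(self, algo_manager):
        self.algo_manager = algo_manager

    def forward(self, function):
        self.algo_manager.enable(function)

class OmediaService:
    """Media service module: the only entry point the third-party
    application talks to; the bottom layer is never exposed directly."""
    def __init__(self, policy, platform_version="1.0"):
        self.policy = policy
        self.platform_version = platform_version

    def query_version(self):
        # Version information returned to the third-party application.
        return self.platform_version

    def open_function(self, function):
        # Selected enhanced function travels service -> policy -> manager.
        self.policy.forward(function)

algo = AlgoManager()
service = OmediaService(MediaPolicy(algo))
service.open_function("super_night_scene")  # assumed function name
```

The point of the indirection is that the application can only request a function by name through the service; it never calls the underlying algorithm module itself.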
The following examples are given.
As shown in fig. 2-2, the initial image data includes Acc data and Gyro data. After the target image data is obtained by the first filtering in the algorithm module, a second filtering is performed in the algorithm module. First, a threshold Th1 is set for the Gyro data and a threshold Th2 is set for the Acc data. It is judged whether the Gyro data is greater than Th1; if the Gyro data is greater than Th1, all image data of the frame image corresponding to the Gyro data is discarded; if the Gyro data is less than Th1, it is further judged whether the Acc data is greater than Th2. If the Acc data is greater than Th2, all image data of the frame image corresponding to the Acc data is discarded; if the Acc data is less than Th2, the image data of that frame is set as the final image data. Determining the final image data according to both the Gyro data and the Acc data in this way improves the quality of the data frames reported to the third-party application, so the accuracy of object identification performed by the third-party application on the final image data is higher.
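The two-stage threshold check of fig. 2-2 can be sketched as follows; the values of Th1 and Th2 and the per-frame data layout are assumptions, and for simplicity a frame is kept only when both checks pass:

```python
TH1 = 0.2   # Gyro threshold (assumed value, rad/s)
TH2 = 1.5   # Acc threshold (assumed value, m/s^2)

def second_filter(frames, th1=TH1, th2=TH2):
    """Second-stage filter from fig. 2-2: discard a frame whose Gyro
    magnitude exceeds Th1, otherwise discard it when its Acc magnitude
    exceeds Th2; frames passing both checks become final image data."""
    final = []
    for frame in frames:
        if frame["gyro"] > th1:
            continue  # too much rotation between frames: drop the frame
        if frame["acc"] > th2:
            continue  # too much shake/acceleration: drop the frame
        final.append(frame)
    return final
```

Each `frame` dict stands in for one frame's full image data (YUV plus sensor readings); in the described flow the whole frame is discarded, not just the sensor values.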
Referring to fig. 3, fig. 3 is a flowchart illustrating another application optimization method according to an embodiment of the present disclosure, where the application optimization method can be applied to the electronic device shown in fig. 1.
As shown, the application optimization method includes the following operations:
step 301, a third party application sends an image preview request to the hardware abstraction module;
step 302, the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media policy module;
step 303, the media policy module receives the initial image data, and invokes a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced function algorithm module selected by the third party application through the media service module and requesting the operating system to be open to the application;
step 304, performing graying processing and Laplace transform on the initial image data to obtain a two-dimensional matrix;
step 305, calculating a mean value and/or a standard deviation of the two-dimensional matrix;
step 306, filtering initial image data corresponding to the two-dimensional matrix which does not conform to the preset mean value and/or the preset standard deviation to obtain target image data;
step 307, processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data;
step 308, the media policy module sends the final image data to the third party application;
step 309, the third party application performs object recognition processing according to the final image data.
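Steps 304–306 amount to a sharpness filter based on the statistics of the Laplacian response; a minimal sketch follows, in which the 4-neighbour Laplacian kernel and the minimum standard deviation are assumptions (the embodiment only specifies "a preset mean value and/or a preset standard deviation"):

```python
def laplacian(gray):
    """Apply a 4-neighbour Laplacian kernel to a 2-D grayscale image
    (list of rows), returning the response matrix (step 304)."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (gray[y - 1][x] + gray[y + 1][x] +
                         gray[y][x - 1] + gray[y][x + 1] -
                         4 * gray[y][x])
    return out

def sharpness_stats(gray):
    """Mean and standard deviation of the Laplacian response (step 305);
    a blurred frame has a response close to zero everywhere."""
    resp = [v for row in laplacian(gray) for v in row]
    mean = sum(resp) / len(resp)
    var = sum((v - mean) ** 2 for v in resp) / len(resp)
    return mean, var ** 0.5

def keep_sharp_frames(frames, min_std=1.0):
    """Step 306: keep only frames whose Laplacian standard deviation
    meets a preset limit (min_std is an assumed value)."""
    return [g for g in frames if sharpness_stats(g)[1] >= min_std]
```

A uniform (featureless or fully blurred) frame yields zero mean and zero standard deviation and is filtered out, while a frame containing edges survives to become target image data for the sensor-based checks of step 307.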
It can be seen that, in the embodiment of the application, the object identification can be performed in the third-party application after the acquired image data is processed according to the YUV data, the Gyro data, the Acc data and the like, so that not only is the quality of the data frame reported to the application ensured, but also the identification accuracy of the application is improved.
Consistent with the embodiments shown in fig. 2-1 and fig. 3, please refer to fig. 4, and fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application, and as shown in the figure, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for executing any step in the foregoing method embodiments.
In one possible example, the program 421 includes instructions for performing the following steps: the third party application sends an image preview request to the hardware abstraction module; the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media strategy module; the media strategy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced function algorithm module which is selected by the third-party application through the media service module and requests the operating system to be open to the application; the media policy module sending the final image data to the third party application; and the third-party application carries out object identification processing according to the final image data.
In one possible example, the initial image data comprises gyroscope data and/or acceleration data corresponding to each frame of image, the pre-enabled algorithm module being configured to perform the following operations by instructions in the program 421: carrying out graying processing and Laplace transformation on the initial image data to obtain a two-dimensional matrix; calculating a mean and/or standard deviation of the two-dimensional matrix; filtering initial image data corresponding to the two-dimensional matrix which does not accord with the preset mean value and/or the preset standard deviation to obtain target image data; and processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data.
In one possible example, in processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data, the instructions in the program 421 are specifically configured to perform the following operations: setting a first threshold value; judging whether the gyroscope data is larger than the first threshold value; if so, discarding target image data corresponding to the gyroscope data; and if not, taking the target image data as final image data.
In one possible example, in processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data, the instructions in the program 421 are specifically configured to perform the following operations: setting a second threshold value; judging whether the acceleration data is larger than the second threshold value; if so, discarding the target image data corresponding to the acceleration data; and if not, taking the target image data as the final image data.
In one possible example, the initial image data includes first gyroscope data corresponding to a first frame image, and second gyroscope data and first acceleration data corresponding to a second frame image, and in terms of the media policy module invoking the algorithm module to process the initial image data to obtain final image data, the instructions in the program 421 are specifically configured to perform the following operations: calculating a moving distance of the second frame image relative to the first frame image according to the first acceleration data; judging whether the moving distance of the second frame image is integral multiple of the width of the first frame image, wherein the width of the first frame image is equal to the width of the second frame image; and if so, reserving the second frame image, and obtaining final image data according to the first gyroscope data and the second gyroscope data.
In one possible example, in the aspect of obtaining the final image data according to the first gyroscope data and the second gyroscope data, the instructions in the program 421 are specifically configured to: respectively carrying out integral operation on the first gyroscope data and the second gyroscope data to obtain integral values; determining a rotation angle of the second frame image corresponding to the second gyroscope data relative to the first frame image corresponding to the first gyroscope data according to the integral value; and adjusting the initial image data of the second frame image corresponding to the second gyroscope data according to the rotation angle to obtain final image data.
In one possible example, a hardware abstraction layer of the operating system is provided with an algorithm management module; before the third-party application sends an image preview request to the hardware abstraction module, the instructions in the program 421 are specifically configured to: the third-party application sends the enhanced function selected to be opened to the media service module; the media service module receives the enhanced function selected to be opened and sends the enhanced function selected to be opened to the algorithm management module through the media strategy module; the algorithm management module enables the algorithm module of the enhanced function selected to be opened.

The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process.
It is understood that, in order to realize the above-mentioned functions, the electronic device comprises corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the various illustrative units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the electronic device may be divided into the functional units according to the method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 5 is a block diagram of functional units of an application optimization apparatus 500 according to an embodiment of the present disclosure. The application optimization device 500 is applied to an electronic device, the electronic device includes a media service module and an operating system, an application layer of the operating system is provided with a third party application, a hardware abstraction layer of the operating system is provided with a hardware abstraction module and a media policy module, the device includes a processing unit and a communication unit, wherein,
the processing unit is used for sending an image preview request to the hardware abstraction module through the communication unit according to the third-party application; the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media strategy module; the media strategy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an algorithm module with enhanced functions, selected by the third-party application through the media service module and requested to be opened by the operating system to the application; and the media policy module sending the final image data to the third party application; and the third-party application performs object recognition processing according to the final image data.
In one possible example, the initial image data includes gyroscope data and/or acceleration data corresponding to each frame of image, and the pre-enabled algorithm module is configured, through the following operations performed by the processing unit 501, to: perform graying processing and Laplace transform on the initial image data to obtain a two-dimensional matrix; calculate a mean and/or standard deviation of the two-dimensional matrix; filter out the initial image data corresponding to the two-dimensional matrices that do not conform to the preset mean value and/or the preset standard deviation to obtain target image data; and process the target image data according to the gyroscope data and/or the acceleration data to obtain final image data.
In a possible example, in the aspect of processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data, the processing unit 501 is specifically configured to set a first threshold; judging whether the gyroscope data is larger than the first threshold value; if so, discarding target image data corresponding to the gyroscope data; and if not, taking the target image data as final image data.
In one possible example, in the aspect of processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data, the processing unit 501 is specifically configured to set a second threshold; judging whether the acceleration data is larger than the second threshold value; if so, discarding the target image data corresponding to the acceleration data; and if not, taking the target image data as the final image data.
In a possible example, the initial image data includes first gyroscope data corresponding to a first frame image, and second gyroscope data and first acceleration data corresponding to a second frame image, and in terms of the media policy module invoking the algorithm module to process the initial image data to obtain final image data, the processing unit 501 is specifically configured to calculate a moving distance of the second frame image relative to the first frame image according to the first acceleration data; judging whether the moving distance of the second frame image is integral multiple of the width of the first frame image, wherein the width of the first frame image is equal to the width of the second frame image; and if so, reserving the second frame image, and obtaining final image data according to the first gyroscope data and the second gyroscope data.
In one possible example, in terms of obtaining final image data according to the first gyroscope data and the second gyroscope data, the processing unit 501 is specifically configured to perform an integration operation on the first gyroscope data and the second gyroscope data respectively to obtain an integrated value; determining a rotation angle of the second frame image corresponding to the second gyroscope data relative to the first frame image corresponding to the first gyroscope data according to the integral value; and adjusting the initial image data of the second frame image corresponding to the second gyroscope data according to the rotation angle to obtain final image data.
In a possible example, a hardware abstraction layer of the operating system is provided with an algorithm management module, and before the third-party application sends an image preview request to the hardware abstraction module, the processing unit 501 is specifically configured to send the enhanced function selected to be opened to the media service module by the third-party application; the media service module receives the enhanced function selected to be opened and sends the enhanced function selected to be opened to the algorithm management module through the media strategy module; the algorithm management module enables the algorithm module of the enhanced function selected to be opened.
The application optimization apparatus 500 may further include a storage unit 503 for storing program codes and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be understood that, since the method embodiment and the apparatus embodiment are different presentation forms of the same technical concept, the content of the method embodiment portion in the present application should be synchronously adapted to the apparatus embodiment portion, and is not described herein again.
Embodiments of the present application further provide a chip, where the chip includes a processor, configured to call and run a computer program from a memory, so that a device in which the chip is installed performs some or all of the steps described in the electronic device in the above method embodiments.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program, the computer program being operable to cause a computer to perform some or all of the steps of any of the methods described in the above method embodiments. The computer program product may be a software installation package, and the computer comprises an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory and including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes: a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, and other media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. The application optimization method is applied to electronic equipment, the electronic equipment comprises a media service module and an operating system, a third party application is arranged on an application layer of the operating system, and a hardware abstraction module and a media strategy module are arranged on a hardware abstraction layer of the operating system; the method comprises the following steps:
the third party application sends an image preview request to the hardware abstraction module;
the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media strategy module;
the media strategy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an enhanced function algorithm module which is selected by the third-party application through the media service module and requests the operating system to be open to the application;
the media policy module sending the final image data to the third party application;
and the third-party application carries out object identification processing according to the final image data.
2. The method of claim 1, wherein the initial image data comprises gyroscope data and/or acceleration data corresponding to each frame of image, the pre-enabled algorithm module being configured to:
carrying out graying processing and Laplace transformation on the initial image data to obtain a two-dimensional matrix;
calculating a mean and/or standard deviation of the two-dimensional matrix;
filtering initial image data corresponding to the two-dimensional matrix which does not accord with the preset mean value and/or the preset standard deviation to obtain target image data;
and processing the target image data according to the gyroscope data and/or the acceleration data to obtain final image data.
3. The method of claim 2, wherein processing the target image data according to the gyroscope data and/or acceleration data to obtain final image data comprises:
setting a first threshold value;
judging whether the gyroscope data is larger than the first threshold value;
if so, discarding target image data corresponding to the gyroscope data;
and if not, taking the target image data as final image data.
4. The method of claim 2, wherein processing the target image data according to the gyroscope data and/or acceleration data to obtain final image data comprises:
setting a second threshold value;
judging whether the acceleration data is larger than the second threshold value;
if so, discarding the target image data corresponding to the acceleration data;
and if not, taking the target image data as the final image data.
5. The method of any of claims 1-4, wherein the initial image data comprises first gyroscope data corresponding to a first frame of image, and second gyroscope data and first acceleration data corresponding to a second frame of image, and wherein the media policy module invokes the algorithm module to process the initial image data to obtain final image data, comprising:
calculating a moving distance of the second frame image relative to the first frame image according to the first acceleration data;
judging whether the moving distance of the second frame image is integral multiple of the width of the first frame image, wherein the width of the first frame image is equal to the width of the second frame image;
and if so, reserving the second frame image, and obtaining final image data according to the first gyroscope data and the second gyroscope data.
6. The method of claim 5, wherein said deriving final image data from said first gyroscope data and said second gyroscope data comprises:
respectively carrying out integral operation on the first gyroscope data and the second gyroscope data to obtain integral values;
determining a rotation angle of the second frame image corresponding to the second gyroscope data relative to the first frame image corresponding to the first gyroscope data according to the integral value;
and adjusting the initial image data of the second frame image corresponding to the second gyroscope data according to the rotation angle to obtain final image data.
7. The method according to any one of claims 1 to 6, wherein a hardware abstraction layer of the operating system is provided with an algorithm management module; before the third-party application sends an image preview request to the hardware abstraction module, the method further includes:
the third-party application sends the enhanced function selected to be opened to the media service module;
the media service module receives the enhanced function selected to be opened and sends the enhanced function selected to be opened to the algorithm management module through the media strategy module;
the algorithm management module enables the algorithm module of the enhanced function selected to be opened.
8. An application optimization device is applied to an electronic device, the electronic device comprises a media service module and an operating system, an application layer of the operating system is provided with a third-party application, a hardware abstraction layer of the operating system is provided with a hardware abstraction module and a media policy module, the device comprises a processing unit and a communication unit, wherein,
the processing unit is used for sending an image preview request to the hardware abstraction module through the communication unit according to the third-party application; the hardware abstraction module calls a bottom layer driver to collect initial image data and sends the initial image data to the media strategy module; the media strategy module receives the initial image data and calls a pre-enabled algorithm module to process the initial image data to obtain final image data, wherein the algorithm module is an algorithm module with enhanced functions, selected by the third-party application through the media service module and requested to be opened by the operating system to the application; and the media policy module sending the final image data to the third party application; and the third-party application performs object recognition processing according to the final image data.
9. An electronic device, comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the one or more programs comprising instructions for performing the steps in the method of any one of claims 1 to 7.
10. A computer storage medium, wherein the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1 to 7.
CN201911252510.7A 2019-12-09 2019-12-09 Application optimization method and related product Active CN110942047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252510.7A CN110942047B (en) 2019-12-09 2019-12-09 Application optimization method and related product


Publications (2)

Publication Number Publication Date
CN110942047A true CN110942047A (en) 2020-03-31
CN110942047B CN110942047B (en) 2023-07-07

Family

ID=69909616

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252510.7A Active CN110942047B (en) 2019-12-09 2019-12-09 Application optimization method and related product

Country Status (1)

Country Link
CN (1) CN110942047B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103534726A (en) * 2011-05-17 2014-01-22 Apple Inc. Positional sensor-assisted image registration for panoramic photography
GB201406926D0 (en) * 2014-04-17 2014-06-04 Nokia Corp A device orientation correction method for panorama images
WO2017075788A1 (en) * 2015-11-05 2017-05-11 Huawei Technologies Co., Ltd. Anti-jitter photographing method and apparatus, and camera device
US20170148017A1 (en) * 2015-11-23 2017-05-25 Xiaomi Inc. Biological recognition technology-based mobile payment device, method and apparatus, and storage medium
CN107172345A (en) * 2017-04-07 2017-09-15 Shenzhen Gionee Communication Equipment Co., Ltd. Image processing method and terminal
CN109325468A (en) * 2018-10-18 2019-02-12 Guangzhou Zhiyan Technology Co., Ltd. Image processing method and apparatus, computer device and storage medium
CN109462732A (en) * 2018-10-29 2019-03-12 Nubia Technology Co., Ltd. Image processing method, device and computer-readable storage medium
CN109981724A (en) * 2019-01-28 2019-07-05 Shanghai Zuoan Xinhui Electronic Technology Co., Ltd. Blockchain-based Internet-of-things terminal, artificial intelligence system and processing method
CN110086967A (en) * 2019-04-10 2019-08-02 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Image processing method, image processor, photographing apparatus and electronic device
CN110164545A (en) * 2019-04-15 2019-08-23 Ping An Property & Casualty Insurance Company of China Data auxiliary processing method and apparatus, computer device and storage medium
CN110177218A (en) * 2019-06-28 2019-08-27 Guangzhou Robustel IoT Technology Co., Ltd. Photographing image processing method for an Android device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
CORAVOS, ANDREA et al.: "Developing and adopting safe and effective digital biomarkers to improve patient outcomes", NPJ Digital Medicine, vol. 2, no. 1, pages 2398-6352 *
LEE HC et al.: "Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor", PLOS ONE, 3 November 2017 (2017-11-03), pages 1-12 *
QUAN Yongbin et al.: "Research on indoor positioning based on inertial sensors and visual sensors", Journal of Dongguan University of Technology, 17 April 2019 (2019-04-17), pages 91-95 *
LIU Zhao: "Design and implementation of an integrated public security information application platform", China Master's Theses Full-text Database: Information Science and Technology, no. 11, pages 136-647 *

Also Published As

Publication number Publication date
CN110942047B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
EP3611915B1 (en) Method and apparatus for image processing
US20190130169A1 (en) Image processing method and device, readable storage medium and electronic device
WO2020125631A1 (en) Video compression method and apparatus, and computer-readable storage medium
CN108024065B (en) Terminal shooting method, terminal and computer readable storage medium
KR20210149848A (en) Skin quality detection method, skin quality classification method, skin quality detection device, electronic device and storage medium
CN108234882B (en) Image blurring method and mobile terminal
WO2021078001A1 (en) Image enhancement method and apparatus
CN110995994A (en) Image shooting method and related device
CN109040596B (en) Method for adjusting camera, mobile terminal and storage medium
CN110958399B (en) High dynamic range image HDR realization method and related product
CN110991369A (en) Image data processing method and related device
JP2004310475A (en) Image processor, cellular phone for performing image processing, and image processing program
CN111814564A (en) Multispectral image-based living body detection method, device, equipment and storage medium
CN109615620B (en) Image compression degree identification method, device, equipment and computer readable storage medium
KR102273059B1 (en) Method, apparatus and electronic device for enhancing face image
US11605220B2 (en) Systems and methods for video surveillance
CN112446254A (en) Face tracking method and related device
CN110933314B (en) Focus-following shooting method and related product
CN113610884A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111161299A (en) Image segmentation method, computer program, storage medium, and electronic device
WO2024001617A1 (en) Method and apparatus for identifying behavior of playing with mobile phone
CN116055895B (en) Image processing method and device, chip system and storage medium
CN110942047B (en) Application optimization method and related product
CN111383255B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN110766631A (en) Face image modification method and device, electronic equipment and computer readable medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant