CN110991368B - Camera scene recognition method and related device - Google Patents

Camera scene recognition method and related device

Info

Publication number
CN110991368B
CN110991368B
Authority
CN
China
Prior art keywords
algorithm
module
scene detection
detection result
scene
Prior art date
Legal status
Active
Application number
CN201911252533.8A
Other languages
Chinese (zh)
Other versions
CN110991368A
Inventor
方攀
陈岩
Current Assignee
Shanghai Jinsheng Communication Technology Co., Ltd.
Original Assignee
Shanghai Jinsheng Communication Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shanghai Jinsheng Communication Technology Co., Ltd.
Priority to CN201911252533.8A
Publication of CN110991368A
Application granted
Publication of CN110991368B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiments of the present application disclose a camera scene recognition method and a related device. The method includes the following steps: a third-party application sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance; and the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm. The embodiments of the present application improve the efficiency and accuracy of camera scene recognition.

Description

Camera scene recognition method and related device
Technical Field
The application relates to the technical field of electronic equipment, in particular to a camera scene recognition method and a related device.
Background
At present, camera application software of all kinds is widely used on electronic devices, and users' demands on how camera applications process data keep rising. Third-party applications need scene detection, yet no unified API currently exists for accessing the underlying scene detection capability. Moreover, running a deep learning network on the CPU performs poorly, and running a scene recognition algorithm stalls the preview picture. A third-party application can therefore only integrate a scene recognition algorithm by itself in order to meet users' requirements.
Disclosure of Invention
The embodiment of the application provides a camera scene recognition method and a related device, so as to improve the efficiency and accuracy of camera scene recognition.
In a first aspect, an embodiment of the present application provides a camera scene recognition method applied to an electronic device, where the electronic device includes a media service module and an operating system whose application layer is provided with a third-party application; the method includes: the third-party application sends a scene detection request to a hardware abstraction layer of the operating system;
the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance;
and the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
In a second aspect, an embodiment of the present application provides a camera scene recognition device, which is applied to an electronic device, where the electronic device includes a media service module and an operating system, and an application layer of the operating system is provided with a third party application; the apparatus comprises a processing unit and a communication unit, wherein,
the processing unit is configured to: send, by the third-party application, a scene detection request to a hardware abstraction layer of the operating system; receive the scene detection request at the hardware abstraction layer, acquire an original scene recognition data frame to be processed, invoke a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and send the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance; and receive the target scene detection result at the third-party application, obtain a target photographing algorithm according to the target scene detection result, and photograph according to the target photographing algorithm.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, the programs including instructions for performing steps in any of the methods of the first aspect of the embodiments of the present application.
In a fourth aspect, embodiments of the present application provide a chip, including a processor for calling and running a computer program from a memory, so that a device on which the chip is mounted performs some or all of the steps described in any of the methods of the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps as described in any of the methods of the first aspect of the embodiments of the present application.
In a sixth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in any of the methods of the first aspect of embodiments of the present application. The computer program product may be a software installation package.
It can be seen that, in the embodiments of the present application, a third-party application in an electronic device sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer then receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance; finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm. A more efficient, customizable scene recognition method is thus provided for the third-party application, and the third-party application customizes a target algorithm according to the scene, which improves the intelligence and accuracy of the camera scene recognition method.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the embodiments or in the description of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and a person skilled in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 2 is a flowchart of a method for identifying a camera scene according to an embodiment of the present application;
fig. 3 is a flowchart of another method for identifying a camera scene according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a functional unit composition block diagram of a camera scene recognition device according to an embodiment of the present application.
Detailed Description
In order to make the present application solution better understood by those skilled in the art, the following description will clearly and completely describe the technical solution in the embodiments of the present application with reference to the accompanying drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
The terms first, second and the like in the description and in the claims of the present application and in the above-described figures, are used for distinguishing between different objects and not for describing a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those listed steps or elements but may include other steps or elements not listed or inherent to such process, method, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
The electronic device according to the embodiments of the present application may be an electronic device with communication capability, which may include various handheld devices, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to a wireless modem, as well as various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and so on.
Currently, on the Android platform, a third-party camera application can access the underlying camera data through the standard Android Application Programming Interface (API). However, if it wants to use more of the underlying enhancement functions, or images processed by the underlying algorithms, there is no corresponding standard interface that exposes the underlying capabilities to third-party access. How to guarantee security after opening the underlying core functions is then a very important question; current schemes grant authorization by means of whitelists and the like.
In the traditional approach, an Android third-party application (App) performs scene detection after receiving YUV data, and cannot obtain bottom-layer hardware acceleration through the Google-native API. Running a deep learning network on the CPU performs poorly, and running a scene recognition algorithm stalls the preview picture, since the platform's GPU and DSP cannot be used to accelerate the computation. For scene recognition AI, taking the MTK platform MT6760 as an example, single-frame processing with MobileNetV2 on the CPU takes about 200 ms. When a third-party camera runs such a network model, phone performance is seriously affected, so the camera preview stutters.
In view of the foregoing, embodiments of the present application provide a method and related apparatus for identifying a camera scene, and the embodiments of the present application are described in detail below with reference to the accompanying drawings.
As shown in fig. 1, an electronic device 100 according to an embodiment of the present application includes a media service module and an operating system, where the operating system may be an Android system. An application layer of the operating system is provided with a third-party application and a media management module (also referred to as a media interface module). A hardware abstraction layer of the operating system is provided with a hardware abstraction module (an Android-native module, such as the native camera hardware abstraction module), a media policy module, and an algorithm management module. In addition, the native operating system architecture further includes a framework layer and a driver layer. The framework layer includes the application interfaces of the various native applications (such as the native camera application program interface), application services (such as the native camera service), and a framework layer interface (such as the Google HAL3 interface). The hardware abstraction layer includes a hardware abstraction layer interface (such as HAL 3.0) and the hardware modules of the various native applications (such as the camera hardware abstraction module). The driver layer includes various drivers (such as a screen display driver, an audio driver, etc.) and is used to enable the various hardware of the electronic device, such as the image signal processor plus the front-end image sensor.
The media service module is independent of the operating system. The third-party application can communicate with the media service module through the media management module, and the media service module can communicate with the media policy module through an Android-native information link composed of the application interface, the application service, the framework layer interface, the hardware abstraction layer interface, and the hardware abstraction module. The media policy module communicates with the algorithm management module, which maintains an Android-native algorithm library containing the enhancement functions supported by the various native applications; for the native camera application, for example, these include binocular shooting, beautification, sharpening, night scene, and other enhancement functions. In addition, the media service module may also communicate directly with the media policy module or the algorithm management module.
Based on this architecture, the media service module can enable an algorithm module in the algorithm library either through the Android-native information link, the media policy module, and the algorithm management module, or directly through the algorithm management module, thereby opening, for the third-party application, the enhancement functions associated with a native application.
Based on this architecture, the media service module can also invoke the corresponding driver to enable certain hardware, either through the Android-native information link, through a first information link formed by the media policy module and the hardware abstraction module, or through a second information link formed by the media policy module, the algorithm management module, and the hardware abstraction module, thereby opening, for the third-party application, the hardware associated with a native application.
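As a rough illustration of the enablement paths described above, the following Python sketch models the module chain. All class and method names are hypothetical stand-ins that only mirror the roles named in this description; this is a sketch of the routing, not an implementation of the actual system.

```python
class AlgorithmManager:
    """Hypothetical model of the algorithm management module and its native library."""
    def __init__(self):
        self.library = {"beauty", "sharpen", "night_scene", "scene_recognition"}
        self.enabled = set()

    def enable(self, algorithm):
        if algorithm not in self.library:
            raise KeyError(f"algorithm not in native library: {algorithm}")
        self.enabled.add(algorithm)
        return algorithm


class MediaPolicyModule:
    """Relays enable requests arriving over the Android-native information link."""
    def __init__(self, algorithm_manager):
        self.algorithm_manager = algorithm_manager

    def enable(self, algorithm):
        return self.algorithm_manager.enable(algorithm)


class MediaServiceModule:
    """Sits outside the OS; can reach the algorithm library by two paths."""
    def __init__(self, policy, manager):
        self.policy = policy    # path 1: native link -> media policy module
        self.manager = manager  # path 2: directly to the algorithm manager

    def open_algorithm(self, algorithm, direct=False):
        target = self.manager if direct else self.policy
        return target.enable(algorithm)


manager = AlgorithmManager()
service = MediaServiceModule(MediaPolicyModule(manager), manager)
service.open_algorithm("scene_recognition")         # via the media policy module
service.open_algorithm("night_scene", direct=True)  # direct enablement
```

Either path ends in the same place: the algorithm library opens the requested enhancement function for the third-party application.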
Referring to fig. 2, fig. 2 is a flowchart of a camera scene recognition method according to an embodiment of the present application, where the camera scene recognition method may be applied to the electronic device shown in fig. 1.
As shown in the figure, the present camera scene recognition method includes the following operations.
S201, the third party application sends a scene detection request to a hardware abstraction layer of the operating system.
S202, the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes an algorithm for realizing scene recognition to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, wherein the algorithm for realizing scene recognition is that the third party application requests an operating system to be opened for the third party application in advance through the media service module.
And S203, the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
Specifically, this scheme is based on the OMedia framework: the third-party application calls the underlying deep learning network through the OMedia framework to perform scene recognition, and the recognized scene is passed back to the third-party application. The recognized scenes can cooperate with the third-party application, which may customize the scenes it wants (e.g., backlight, night scene, blue sky, etc.).
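A minimal sketch of the scene customization idea: the third-party application declares the scenes it cares about, and any other label returned by the underlying network is collapsed to a default. The scene names come from the examples above; the helper function and the "default" fallback label are assumptions made for illustration.

```python
# Scenes this third-party application has customized (examples from the text above).
SUBSCRIBED_SCENES = {"backlight", "night_scene", "blue_sky"}

def filter_scene(detected_label):
    """Pass a detected scene label through only if the app subscribed to it
    (hypothetical helper; unsubscribed labels fall back to "default")."""
    return detected_label if detected_label in SUBSCRIBED_SCENES else "default"

# A label the app asked for is passed back; an unrequested one is collapsed.
night = filter_scene("night_scene")
other = filter_scene("food")
```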
It can be seen that, in the embodiments of the present application, a third-party application in an electronic device sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer then receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance; finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm. A more efficient, customizable scene recognition method is thus provided for the third-party application, and the third-party application customizes a target algorithm according to the scene, which improves the intelligence and accuracy of the camera scene recognition method.
In one possible example, a hardware abstraction layer of the operating system is provided with a media policy module; the third party application sending a scene detection request to a hardware abstraction layer of the operating system, comprising: the third party application sends a scene detection request to the media service module; the media service module receives the scene detection request and issues the scene detection request to the media policy module; the media policy module receives the scene detection request and simultaneously issues the scene detection request to a bottom layer driver.
The media service module is an OPPO media platform service module, and the media policy module is a request invocation module.
The third party application is in communication connection with the OMedia SDK interface, the OMedia SDK interface is in communication connection with the media service module, the media service module is in communication connection with the media policy module, and the media policy module is in communication connection with the algorithm management module.
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, where the hardware abstraction module is connected to the algorithm management module through the media policy module. The hardware abstraction layer receiving the scene detection request, acquiring an original scene recognition data frame to be processed, and invoking a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result includes: the media service module parses the scene detection request to obtain first information and issues the first information to the media policy module; the media policy module receives the first information and simultaneously issues it to the bottom layer driver; the media policy module receives the original scene recognition data frame reported by the bottom layer driver; the media policy module transmits the original scene recognition data frame to the algorithm management module; and the algorithm management module invokes the scene recognition algorithm to detect the original scene recognition data frame, with hardware acceleration, to obtain the target scene detection result.
The first information may be scene identification information, scene change information, etc., which is not limited herein.
In one possible example, the algorithm management module invoking the scene recognition algorithm to detect the original scene recognition data frame with acceleration to obtain the target scene detection result includes: the algorithm management module invokes a preset hardware module to accelerate a preset algorithm to obtain an accelerated preset algorithm, where the preset hardware module includes a DSP, an NPU, and a GPU; and the algorithm management module invokes the accelerated preset algorithm to detect the scene recognition data frame to obtain the target scene detection result.
The preset algorithm may be a scene recognition algorithm, which is not limited herein.
In this example, the scene detection result of the third party application is determined through the preset algorithm, and according to the scene detection result, the third party application can set the scene requirement by itself, so that the intelligence and accuracy of the camera scene recognition method are improved.
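The accelerator choice above can be sketched as a simple selection over the available hardware. The text only says the preset hardware module includes a DSP, an NPU, and a GPU; the preference order used here (NPU, then DSP, then GPU, with a CPU fallback) is an assumption for illustration, as is the whole function.

```python
# Assumed preference order; the source only names DSP, NPU, and GPU as options.
ACCELERATOR_PREFERENCE = ["NPU", "DSP", "GPU"]

def accelerate(preset_algorithm, available_hardware):
    """Bind the preset algorithm to the best available accelerator
    (hypothetical helper; falls back to the unaccelerated CPU path)."""
    for hw in ACCELERATOR_PREFERENCE:
        if hw in available_hardware:
            return {"algorithm": preset_algorithm, "backend": hw}
    return {"algorithm": preset_algorithm, "backend": "CPU"}

# On a platform exposing only a DSP and a GPU, the DSP is selected.
accelerated = accelerate("scene_recognition", {"DSP", "GPU"})
```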
In one possible example, the third-party application receiving the target scene detection result, obtaining a target photographing algorithm according to the target scene detection result, and photographing according to the target photographing algorithm includes: querying a preset database to obtain a target algorithm corresponding to the target scene detection result, where the preset database includes the mapping relationships between scene detection results and algorithms.
The mapping relationship may be one-to-one, one-to-many, or many-to-many, which is not limited herein.
In this example, different target algorithms are obtained according to different scene detection results, so that diversity and accuracy of the scene detection algorithm are improved.
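The preset database can be pictured as a lookup from a scene detection result to one or more photographing algorithms. The entries below are invented for illustration (the source does not enumerate the database contents); a one-to-many mapping is shown as a list with two algorithms.

```python
# Hypothetical preset database: scene detection result -> photographing algorithm(s).
PRESET_DATABASE = {
    "night_scene": ["night_mode"],
    "backlight": ["hdr"],
    "blue_sky": ["saturation_boost", "dehaze"],  # one-to-many example
}

def target_algorithms(scene_result):
    """Query the preset database for the target photographing algorithm(s);
    an unmapped scene falls back to an assumed default algorithm."""
    return PRESET_DATABASE.get(scene_result, ["default"])
```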
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module and an algorithm management module, wherein the hardware abstraction module is connected with the algorithm management module through the media policy module; the third party application receiving the target scene detection result comprises: the algorithm management module reports the target scene detection result to the media policy module; the media policy module reports the target scene detection result to a hardware abstraction module; the hardware abstraction module reports the target scene detection result to an information callback module; and the third party application receives the target scene detection result reported by the information callback module.
The information callback module is in communication connection with the third party application, and the information callback module is in communication connection with the media service module.
In this example, through the information interaction between the modules, the scene detection result is fed back to the third-party application accurately and rapidly, which improves the intelligence and accuracy of the camera scene recognition method.
In one possible example, the media policy module receiving the original scene recognition data frame reported by the underlying driver includes: the underlying driver reports the scene recognition data frame to the hardware abstraction module; and the hardware abstraction module reports the scene recognition data frame to the media policy module.
The underlying driver may include, but is not limited to, a sensor module and an image signal processing module, where the sensor module is communicatively coupled to the image signal processing module, the image signal processing module is communicatively coupled to a hardware abstraction module, and the hardware abstraction module is communicatively coupled to a media policy module.
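The two reporting chains described above (raw frames flowing up to the media policy module, and detection results flowing back to the application) can be traced with a small sketch. The hop lists simply restate the module order from this description; the functions themselves are hypothetical.

```python
def report_frame(frame):
    """Trace the upward path of a raw scene recognition data frame,
    per the chain described in the text."""
    hops = ["sensor / image signal processing driver",
            "hardware abstraction module",
            "media policy module"]
    return {"frame": frame, "path": hops}

def report_result(result):
    """Trace the path a target scene detection result takes back to the app."""
    hops = ["algorithm management module",
            "media policy module",
            "hardware abstraction module",
            "information callback module",
            "third-party application"]
    return {"result": result, "path": hops}
```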
It can be seen that in this example, by building a pathway, it is possible for third party applications to use computationally complex scene detection algorithms; providing a more efficient customizable scenario recognition solution for third party applications; the customizable scene detection interface is provided for the third party application, and the third party application can use different algorithms according to the scene, so that the intelligence of the third party camera is improved.
Referring to fig. 3, fig. 3 is a flowchart illustrating another method for identifying a camera scene according to an embodiment of the present application, where the method for identifying a camera scene may be applied to the electronic device shown in fig. 1.
As shown in the figure, the present camera scene recognition method includes the following operations:
S301, the third party application sends a scene detection request to the media service module.
S302, the media service module receives the scene detection request and issues the scene detection request to the media policy module.
S303, the media policy module receives the scene detection request and simultaneously issues the scene detection request to a bottom layer driver.
S304, the media service module analyzes the scene detection request to obtain first information, and issues the first information to the media policy module.
S305, the media policy module receives the first information and simultaneously issues the first information to the bottom layer driver.
S306, the media policy module receives the original scene recognition data frame reported by the bottom layer driver.
S307, the media policy module transmits the original scene recognition data frame to the algorithm management module.
S308, the algorithm management module calls the scene recognition algorithm to detect the original scene recognition data frame and accelerate to obtain a target scene detection result.
S309, the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
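The numbered steps above can be strung together in a small, self-contained simulation. Everything below is a hypothetical stand-in written only to illustrate the order of operations; the toy luminance threshold, scene labels, and algorithm names are assumptions, not part of the described system.

```python
def scene_detection_flow(raw_frame):
    """Simulate S301-S309 end to end with toy stand-ins for each module."""
    # S301-S303: app -> media service module -> media policy module -> driver
    request = {"type": "scene_detection"}
    # S304-S305: the media service module parses the request into first information
    first_info = {"scene_id": request["type"]}
    # S306-S307: the driver reports the raw frame; the policy module hands it
    # to the algorithm management module
    frame = raw_frame
    # S308: the (accelerated) scene recognition algorithm produces a result;
    # a mean-luminance threshold stands in for the real deep learning network
    detection = "night_scene" if sum(frame) / len(frame) < 64 else "daylight"
    # S309: the app maps the result to a target photographing algorithm and shoots
    algorithm = {"night_scene": "night_mode", "daylight": "standard"}[detection]
    return detection, algorithm

dark_frame = bytes([10] * 16)  # a toy low-luminance YUV-like frame
result = scene_detection_flow(dark_frame)
```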
It can be seen that, in the embodiments of the present application, a third-party application in an electronic device sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer then receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third-party application, where the scene recognition algorithm is one that the third-party application has requested the operating system, through the media service module, to open to it in advance; finally, the third-party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm. A more efficient, customizable scene recognition method is thus provided for the third-party application, and the third-party application customizes a target algorithm according to the scene, which improves the intelligence and accuracy of the camera scene recognition method.
Referring to fig. 4, in accordance with the embodiments shown in fig. 2 and fig. 3, fig. 4 is a schematic structural diagram of an electronic device 400 provided in an embodiment of the present application, where as shown in the fig. 4, the electronic device 400 includes an application processor 410, a memory 420, a communication interface 430, and one or more programs 421, where the one or more programs 421 are stored in the memory 420 and configured to be executed by the application processor 410, and the one or more programs 421 include instructions for executing any of the steps in the method embodiments.
In one possible example, the program 421 includes instructions for performing the following steps: the third party application sends a scene detection request to a hardware abstraction layer of the operating system; the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, where the scene recognition algorithm is one that the third party application has requested the operating system, in advance and through the media service module, to open to the third party application; and the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
In one possible example, a hardware abstraction layer of the operating system is provided with a media policy module; in terms of the third party application sending a scene detection request to the hardware abstraction layer of the operating system, the instructions in the program 421 are specifically configured to: the third party application sends a scene detection request to the media service module; the media service module receives the scene detection request and issues the scene detection request to the media policy module; the media policy module receives the scene detection request and simultaneously issues the scene detection request to a bottom layer driver.
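As an illustrative sketch (not part of the patent text), the request path in this example, third party application to media service module to media policy module to bottom layer driver, can be modeled as a chain of forwarding handlers. All class and method names below are assumptions for demonstration:

```python
class UnderlyingDriver:
    """Illustrative stand-in for the bottom layer camera driver."""
    def __init__(self):
        self.received = []

    def handle(self, request):
        # The driver records the request; a real driver would start
        # streaming raw scene recognition data frames in response.
        self.received.append(request)


class MediaPolicyModule:
    """Lives in the hardware abstraction layer; forwards requests downward."""
    def __init__(self, driver):
        self.driver = driver

    def handle(self, request):
        # The policy module receives the request and issues it to the driver.
        self.driver.handle(request)


class MediaServiceModule:
    """Receives requests from the third party application."""
    def __init__(self, policy):
        self.policy = policy

    def handle(self, request):
        # The service module receives the request and issues it downward.
        self.policy.handle(request)


# A third party application sends a scene detection request down the chain.
driver = UnderlyingDriver()
service = MediaServiceModule(MediaPolicyModule(driver))
service.handle({"type": "scene_detection"})
print(driver.received)
```

Each hop only forwards the request, which mirrors the layered dispatch the example describes: the application never talks to the driver directly.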
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, and the hardware abstraction module is connected with the algorithm management module through the media policy module; in terms of the hardware abstraction layer receiving the scene detection request, acquiring an original scene recognition data frame to be processed, and invoking the scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, the instructions in the program 421 are specifically configured to perform the following operations: the media service module parses the scene detection request to obtain first information, and issues the first information to the media policy module; the media policy module receives the first information and simultaneously issues the first information to the bottom layer driver; the media policy module receives an original scene recognition data frame reported by the bottom layer driver; the media policy module transmits the original scene recognition data frame to the algorithm management module; and the algorithm management module invokes the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain a target scene detection result.
In one possible example, in terms of the algorithm management module invoking the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain the target scene detection result, the instructions in the program 421 are specifically configured to: invoke, by the algorithm management module, a preset hardware module to accelerate the preset algorithm to obtain an accelerated preset algorithm, where the preset hardware module includes a DSP, an NPU, and a GPU; and invoke, by the algorithm management module, the accelerated preset algorithm to detect the scene recognition data frame to obtain a target scene detection result.
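One hedged way to picture the acceleration step: the algorithm management module binds the preset algorithm to whichever preset hardware module (DSP, NPU, or GPU) is available. The priority order and the CPU fallback below are assumptions, not taken from the patent:

```python
# Preset hardware modules named in the text; the preference order is assumed.
ACCELERATORS = ["NPU", "DSP", "GPU"]


def accelerate(algorithm_name, available):
    """Bind the preset algorithm to the first available accelerator,
    falling back to the CPU when none of the preset modules is present."""
    for hw in ACCELERATORS:
        if hw in available:
            return {"algorithm": algorithm_name, "backend": hw}
    return {"algorithm": algorithm_name, "backend": "CPU"}


# NPU is absent here, so the DSP backend is chosen.
accelerated = accelerate("scene_recognition", available={"DSP", "GPU"})
print(accelerated)
```

A real implementation would compile or dispatch the model to the chosen backend rather than tag it with a string; the sketch only shows the selection decision.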
In one possible example, in terms of the third party application receiving the target scene detection result, obtaining a target photographing algorithm according to the target scene detection result, and photographing according to the target photographing algorithm, the program 421 includes instructions for: querying a preset database to obtain a target algorithm corresponding to the target scene detection result, where the preset database includes a mapping relationship between scene detection results and algorithms.
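The preset database of this example could be as simple as a lookup table from scene detection results to photographing algorithms; the entries below are hypothetical, since the patent only requires that such a mapping exists:

```python
# Illustrative preset database: scene detection result -> photographing
# algorithm. The keys and values are assumptions for demonstration.
PRESET_DATABASE = {
    "night_scene": "night_mode_multi_frame",
    "backlight": "hdr_fusion",
    "portrait": "bokeh_pipeline",
}


def query_target_algorithm(scene_result, default="auto"):
    """Query the preset database for the photographing algorithm mapped to
    the target scene detection result; unmapped scenes use a default."""
    return PRESET_DATABASE.get(scene_result, default)


print(query_target_algorithm("night_scene"))
print(query_target_algorithm("macro"))  # unmapped scene falls back to default
```

The fallback for unmapped scenes is an added assumption; the text does not say how an unknown scene result is handled.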
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module and an algorithm management module, wherein the hardware abstraction module is connected with the algorithm management module through the media policy module; in aspects where the third party application receives the target scene detection result, the program 421 includes instructions for: the algorithm management module reports the target scene detection result to the media policy module; the media policy module reports the target scene detection result to a hardware abstraction module; the hardware abstraction module reports the target scene detection result to an information callback module; and the third party application receives the target scene detection result reported by the information callback module.
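The upward reporting chain in this example (algorithm management module to media policy module to hardware abstraction module to information callback module to third party application) can be sketched as follows; the module strings and callback class are illustrative assumptions:

```python
class InformationCallback:
    """Illustrative information callback module delivering results
    to the third party application."""
    def __init__(self):
        self.delivered = None

    def report(self, result):
        self.delivered = result


def report_result(result, callback):
    """Model the upward path: each intermediate module simply forwards the
    target scene detection result; the hop names follow the text."""
    hops = []
    for module in ("algorithm_management", "media_policy", "hardware_abstraction"):
        hops.append(module)  # result passes through each module in turn
    callback.report(result)  # final delivery via the information callback
    return hops


cb = InformationCallback()
hops = report_result("night_scene", cb)
print(cb.delivered, hops)
```

The symmetry with the downward request chain is the point: requests flow down through the same layers that results later flow back up.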
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It will be appreciated that the electronic device, in order to achieve the above-described functions, includes corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments of the present application, the electronic device may be divided into functional units according to the above method examples. For example, each functional unit may be divided to correspond to one function, or two or more functions may be integrated into one processing unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit. It should be noted that the division of units in the embodiments of the present application is schematic and is merely a division by logical function; other division manners are possible in actual implementation.
Fig. 5 is a functional unit block diagram of a camera scene recognition apparatus 500 related in the embodiment of the present application. The camera scene recognition apparatus 500 is applied to an electronic device, where the electronic device includes a media service module and an operating system, and a third party application is arranged on an application layer of the operating system. The apparatus includes a processing unit 501 and a communication unit 502, where the processing unit 501 is configured to perform any step of the above method embodiments and, when performing data transmission such as sending, selectively invokes the communication unit 502 to complete the corresponding operation. A detailed description follows.
The processing unit 501 is configured to send a scene detection request to a hardware abstraction layer of the operating system by using the third party application;
the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, where the scene recognition algorithm is one that the third party application has requested the operating system, in advance and through the media service module, to open to the third party application;
And the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
In one possible example, a hardware abstraction layer of the operating system is provided with a media policy module; in the aspect that the third party application sends a scene detection request to a hardware abstraction layer of the operating system, the processing unit 501 is specifically configured to send a scene detection request to the media service module; the media service module receives the scene detection request and issues the scene detection request to the media policy module; the media policy module receives the scene detection request and simultaneously issues the scene detection request to a bottom layer driver.
In one possible example, the hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, where the hardware abstraction module is connected with the algorithm management module through the media policy module; in terms of the hardware abstraction layer receiving the scene detection request, acquiring an original scene recognition data frame to be processed, and invoking the scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, the processing unit 501 is specifically configured to: parse, by the media service module, the scene detection request to obtain first information, and issue the first information to the media policy module; receive, by the media policy module, the first information and simultaneously issue the first information to the bottom layer driver; receive, by the media policy module, an original scene recognition data frame reported by the bottom layer driver; transmit, by the media policy module, the original scene recognition data frame to the algorithm management module; and invoke, by the algorithm management module, the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain a target scene detection result.
In one possible example, in terms of the algorithm management module invoking the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain a target scene detection result: the algorithm management module invokes a preset hardware module to accelerate the preset algorithm to obtain an accelerated preset algorithm, where the preset hardware module includes a DSP, an NPU, and a GPU; and the algorithm management module invokes the accelerated preset algorithm to detect the scene recognition data frame to obtain a target scene detection result.
In one possible example, in terms of the third party application receiving the target scene detection result, obtaining a target photographing algorithm according to the target scene detection result, and photographing according to the target photographing algorithm, the processing unit 501 is configured to query a preset database to obtain a target algorithm corresponding to the target scene detection result, where the preset database includes a mapping relationship between scene detection results and algorithms.
In one possible example, a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module and an algorithm management module, wherein the hardware abstraction module is connected with the algorithm management module through the media policy module; in the aspect that the third party application receives the target scene detection result, the processing unit 501 is configured to report the target scene detection result to the media policy module by using the algorithm management module; the media policy module reports the target scene detection result to a hardware abstraction module; the hardware abstraction module reports the target scene detection result to an information callback module; and the third party application receives the target scene detection result reported by the information callback module.
The camera scene recognition means 500 may further comprise a storage unit 503 for storing program code and data of the electronic device. The processing unit 501 may be a processor, the communication unit 502 may be a touch display screen or a transceiver, and the storage unit 503 may be a memory.
It can be understood that, since the method embodiments and the apparatus embodiments are different presentations of the same technical concept, the content of the method embodiments in the present application applies equally to the apparatus embodiments and is not repeated here.
An embodiment of the present application also provides a chip, where the chip includes a processor configured to call and run a computer program from a memory, so that a device on which the chip is mounted performs some or all of the steps described for the electronic device in the above method embodiments.
An embodiment of the present application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform some or all of the steps of any method described in the above method embodiments; the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of units described above is merely a division by logical function, and other division manners are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communication connection shown or discussed between components may be an indirect coupling or communication connection via some interfaces, devices, or units, and may be electrical or take other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
If the integrated units described above are implemented in the form of software functional units and sold or used as stand-alone products, they may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include a flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present application, and the above description of the embodiments is provided only to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make modifications to the specific implementations and the application scope in accordance with the ideas of the present application. In summary, the content of this specification should not be construed as limiting the present application.

Claims (10)

1. A camera scene recognition method, characterized by being applied to an electronic device, wherein the electronic device comprises a media service module and an operating system, and an application layer of the operating system is provided with a third party application; the method comprises the following steps:
The third party application sends a scene detection request to a hardware abstraction layer of the operating system;
the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and sends the target scene detection result to the third party application, wherein the scene recognition algorithm is one that the third party application has requested the operating system, in advance and through the media service module, to open to the third party application;
and the third party application receives the target scene detection result, obtains a target photographing algorithm according to the target scene detection result, and photographs according to the target photographing algorithm.
2. The method of claim 1, wherein a hardware abstraction layer of the operating system is provided with a media policy module; the third party application sending a scene detection request to a hardware abstraction layer of the operating system, comprising:
the third party application sends a scene detection request to the media service module;
the media service module receives the scene detection request and issues the scene detection request to the media policy module;
The media policy module receives the scene detection request and simultaneously issues the scene detection request to a bottom layer driver.
3. The method according to claim 1 or 2, wherein a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module and an algorithm management module, the hardware abstraction module being connected to the algorithm management module through the media policy module; the hardware abstraction layer receives the scene detection request, acquires an original scene recognition data frame to be processed, invokes an algorithm for realizing scene recognition to process the original scene recognition data frame, and obtains a target scene detection result, and the method comprises the following steps:
the media service module analyzes the scene detection request to obtain first information, and issues the first information to the media policy module;
the media policy module receives the first information and simultaneously issues the first information to a bottom layer driver;
the media strategy module receives an original scene identification data frame reported by the bottom layer driver;
the media policy module transmits the original scene identification data frame to the algorithm management module;
and the algorithm management module invokes the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain a target scene detection result.
4. The method of claim 3, wherein the algorithm management module invoking the scene recognition algorithm, with acceleration, to detect the original scene recognition data frame and obtain a target scene detection result comprises:
the algorithm management module invokes a preset hardware module to accelerate the preset algorithm to obtain an accelerated preset algorithm, wherein the preset hardware module comprises a DSP, an NPU, and a GPU;
and the algorithm management module invokes the accelerated preset algorithm to detect the scene recognition data frame to obtain a target scene detection result.
5. The method of claim 1, wherein the third party application receiving the target scene detection result, and obtaining a target photographing algorithm according to the target scene detection result, photographing according to the target photographing algorithm, comprises:
inquiring a preset database to obtain a target algorithm corresponding to the target scene detection result, wherein the preset database comprises a mapping relation corresponding to the scene detection result and the algorithm.
6. The method according to claim 1, wherein a hardware abstraction layer of the operating system is provided with a hardware abstraction module, a media policy module, and an algorithm management module, the hardware abstraction module being connected to the algorithm management module through the media policy module; the third party application receiving the target scene detection result comprises:
the algorithm management module reports the target scene detection result to the media policy module;
the media policy module reports the target scene detection result to a hardware abstraction module;
the hardware abstraction module reports the target scene detection result to an information callback module;
and the third party application receives the target scene detection result reported by the information callback module.
7. A camera scene recognition apparatus, characterized by being applied to an electronic device, wherein the electronic device comprises a media service module and an operating system, and an application layer of the operating system is provided with a third party application; the apparatus comprises a processing unit and a communication unit, wherein
the processing unit is configured to: send, by the third party application, a scene detection request to a hardware abstraction layer of the operating system; receive, by the hardware abstraction layer, the scene detection request, acquire an original scene recognition data frame to be processed, invoke a scene recognition algorithm to process the original scene recognition data frame to obtain a target scene detection result, and send the target scene detection result to the third party application, wherein the scene recognition algorithm is one that the third party application has requested the operating system, in advance and through the media service module, to open to the third party application; and receive, by the third party application, the target scene detection result, obtain a target photographing algorithm according to the target scene detection result, and photograph according to the target photographing algorithm.
8. A chip, comprising: a processor for calling and running a computer program from a memory, causing a device on which the chip is mounted to perform the method of any of claims 1-6.
9. An electronic device comprising a processor, a memory, a communication interface, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any of claims 1-6.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
CN201911252533.8A 2019-12-09 2019-12-09 Camera scene recognition method and related device Active CN110991368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911252533.8A CN110991368B (en) 2019-12-09 2019-12-09 Camera scene recognition method and related device


Publications (2)

Publication Number Publication Date
CN110991368A CN110991368A (en) 2020-04-10
CN110991368B true CN110991368B (en) 2023-06-02

Family

ID=70091486

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911252533.8A Active CN110991368B (en) 2019-12-09 2019-12-09 Camera scene recognition method and related device

Country Status (1)

Country Link
CN (1) CN110991368B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110941821A (en) * 2019-12-09 2020-03-31 Oppo广东移动通信有限公司 Data processing method, device and storage medium
CN111061524A (en) * 2019-12-09 2020-04-24 Oppo广东移动通信有限公司 Application data processing method and related device
CN111491102B (en) * 2020-04-22 2022-01-07 Oppo广东移动通信有限公司 Detection method and system for photographing scene, mobile terminal and storage medium
CN112162797B (en) * 2020-10-14 2022-01-25 珠海格力电器股份有限公司 Data processing method, system, storage medium and electronic device
CN114727004A (en) * 2021-01-05 2022-07-08 北京小米移动软件有限公司 Image acquisition method and device, electronic equipment and storage medium
CN116347009B (en) * 2023-02-24 2023-12-15 荣耀终端有限公司 Video generation method and electronic equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107483725A (en) * 2017-07-31 2017-12-15 广东欧珀移动通信有限公司 Resource allocation method and Related product
CN109831629A (en) * 2019-03-14 2019-05-31 Oppo广东移动通信有限公司 Method of adjustment, device, terminal and the storage medium of terminal photographing mode

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11182210B2 (en) * 2017-07-31 2021-11-23 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for resource allocation and terminal device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Zhu Yupeng; Bai Hailiang; Wang Hongqiang. Design and Implementation of a Battlefield Data Fusion Simulation System. Fire Control &amp; Command Control. 2010, (01), full text. *


Similar Documents

Publication Publication Date Title
CN110991368B (en) Camera scene recognition method and related device
CN111724775B (en) Voice interaction method and electronic equipment
CN110995994B (en) Image shooting method and related device
WO2021115038A1 (en) Application data processing method and related apparatus
US20230069398A1 (en) Method for Implementing Wi-Fi Peer-To-Peer Service and Related Device
CN106657528A (en) Incoming call management method and device
US20220159453A1 (en) Method for Using Remote SIM Module and Electronic Device
CN116360725B (en) Display interaction system, display method and device
CN106992953A (en) System information acquisition method and device
US20230122238A1 (en) Account binding method, device, and system
US20240069850A1 (en) Application Sharing Method, Electronic Device, and Storage Medium
CN110413383B (en) Event processing method, device, terminal and storage medium
CN115309547B (en) Method and device for processing asynchronous binder call
WO2023005711A1 (en) Service recommendation method and electronic device
CN111666075A (en) Multi-device interaction method and system
US20220311700A1 (en) Method for multiplexing http channels and terminal
CN115086888B (en) Message notification method and device and electronic equipment
CN117544717A (en) Risk identification method and electronic equipment
CN116305093A (en) Method for operating applet and electronic device
CN116414500A (en) Recording method, acquisition method and terminal equipment for operation guide information of electronic equipment
CN110996089B (en) Volume calculation method and related device
CN116991302B (en) Application and gesture navigation bar compatible operation method, graphical interface and related device
CN115016666B (en) Touch processing method, terminal equipment and storage medium
CN116056176B (en) APN switching method and related equipment
CN116703689B (en) Method and device for generating shader program and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant