CN116708886A - Video processing method, device and storage medium - Google Patents

Video processing method, device and storage medium

Info

Publication number
CN116708886A
CN116708886A
Authority
CN
China
Prior art keywords
video
service
application
target application
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211467519.1A
Other languages
Chinese (zh)
Other versions
CN116708886B (en)
Inventor
谷代平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co Ltd filed Critical Honor Device Co Ltd
Priority to CN202211467519.1A priority Critical patent/CN116708886B/en
Publication of CN116708886A publication Critical patent/CN116708886A/en
Application granted granted Critical
Publication of CN116708886B publication Critical patent/CN116708886B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/414: Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
    • H04N 21/41407: Specialised client platforms embedded in a portable device, e.g. video client on a mobile phone, PDA, laptop
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; client middleware
    • H04N 21/443: OS processes, e.g. booting an STB, implementing a Java virtual machine in an STB or power management in an STB
    • H04N 21/4436: Power management, e.g. shutting down unused components of the receiver
    • H04N 21/4438: Window management, e.g. event handling following interaction with the user interface
    • H04N 21/45: Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N 21/462: Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
    • H04N 21/4627: Rights management associated to the content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a video processing method, device, and storage medium, relating to the field of terminal technologies. The method comprises the following steps: when the electronic device detects an operation of playing a video in a target application, it performs a permission check on whether the target application is authorized to use a first video service, where the first video service is a service provided by the electronic device for optimizing video image quality. If the electronic device determines that the target application is authorized to use the first video service, it invokes the capabilities of the first video service to optimize the image quality of the video in the target application, so that the target application makes full use of the display engine capability of the device and the user's video-watching experience is improved.

Description

Video processing method, device and storage medium
Technical Field
The present application relates to the field of terminal technologies, and in particular, to a video processing method, apparatus, and storage medium.
Background
High dynamic range (HDR) video provides a wider dynamic range and more image detail than conventional standard dynamic range (SDR) video, bringing richer colors and more vivid, natural detail to the video picture, so that it is closer to what the human eye would see. To improve the video-watching experience, most electronic devices such as mobile phones and tablet computers support playback of HDR video.
However, when a user plays video in a third-party application, the device currently does not allow the third-party application to adjust the relevant video parameters according to the actual display effect of the video played on the device, resulting in a poor user experience.
Disclosure of Invention
The embodiments of the present application provide a video processing method, device, and storage medium that enable an application to use the display engine capability of the device to optimize video image quality.
In a first aspect, an embodiment of the present application provides a video processing method applied to an electronic device: the electronic device detects an operation of playing a video in a target application and verifies whether the target application has permission to use a first video service, where the first video service is a service provided by the electronic device for optimizing the video playing effect; if the target application has permission to use the first video service, the electronic device invokes the capabilities of the first video service to optimize the image quality of the video in the target application.
In this embodiment, the first video service may correspond to the video enhancement service shown in fig. 7; the video enhancement service includes, for example, a high dynamic range (HDR) video enhancement service. The target application may be any application capable of playing video, such as a video (including short video) application, a game application, a social application, or a shopping application.
With the above scheme, when the user triggers video playback in the target application, the electronic device verifies whether the target application has permission to use the display engine capability of the device, and thereby determines whether to invoke that capability to optimize the video image quality of the target application. This solves the problem that current third-party applications cannot fully utilize the display engine capability of the device, and improves the user's video-watching experience.
In an optional embodiment of the first aspect, detecting an operation to play the target video includes: detecting a first operation of starting a target application; or, a first operation of opening the target application is detected, and a second operation of opening the target video in the target application is detected.
The above scheme shows two specific application scenarios:
In one scenario, the user clicks to open the target application, whose display interface provides multiple video resources; the user can click any video resource in the display interface to watch it, and clicking a video resource triggers the device to verify whether the target application has permission to use the first video service.
In another scenario, the user clicks to open the target application, and the target application directly plays a video resource, such as a short video; at this point the device is triggered to verify whether the target application has permission to use the first video service.
In an optional embodiment of the first aspect, the electronic device verifying whether the target application has permission to use the first video service comprises: the electronic equipment sends a first request to the cloud server, wherein the first request is used for requesting to verify whether the target application has permission to use the first video service; the electronic device receives a first response from the cloud server, wherein the first response is used for indicating that the target application has permission to use the first video service or indicating that the target application does not have permission to use the first video service.
In this embodiment, the cloud server may be the cloud server associated with the electronic device. The cloud server may be configured to allocate, for the target application, a permission identifier for using the display engine capability of the electronic device (the second identifier described below), and is further configured to check whether a given target application has permission to use that capability. The display engine capability includes the capabilities of the first video service, such as HDR video enhancement and HDR Vivid.
With this scheme, the electronic device verifies through the cloud server whether the target application has permission to use the first video service of the current device, thereby improving the image quality of video played by the application.
In an alternative embodiment of the first aspect, the first request comprises at least one of a first identifier of the target application and a second identifier of the target application; the first identifier indicates the name of the target application, and the second identifier is an identifier allocated to the target application by the cloud server.
In this embodiment, the name of the target application may also be referred to as the package name of the target application.
The cloud server can determine through a local query whether the target application has permission to use the first video service of the device: the correspondences between first identifiers and second identifiers are stored locally, and if an allocation record for the target application is stored locally, that is, if the correspondence between the first identifier and the second identifier of the target application is recorded, the cloud server determines that the target application has permission to use the first video service of the device.
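The check described above can be sketched end to end. This is an illustrative model only; every name in it (the stub class, `assign_identifier`, `check_permission`) is hypothetical and does not reflect a real cloud interface:

```python
# Illustrative sketch of the cloud permission check: the device sends a
# first request carrying the application's first identifier (package name)
# and second identifier (cloud-assigned), and the server answers from its
# locally stored allocation records. All names are hypothetical.

class CloudServerStub:
    def __init__(self):
        # local store: first identifier -> second identifier
        self._allocations = {}

    def assign_identifier(self, package_name: str) -> str:
        """Allocate a second identifier for an application (developer onboarding)."""
        second_id = "perm-" + str(len(self._allocations))
        self._allocations[package_name] = second_id
        return second_id

    def check_permission(self, package_name: str, second_id: str) -> bool:
        """Local query: permission exists iff the recorded pair matches."""
        return self._allocations.get(package_name) == second_id


def verify_first_video_service(server: CloudServerStub,
                               package_name: str, second_id: str) -> bool:
    """Device side: send the first request, interpret the first response."""
    return server.check_permission(package_name, second_id)
```

An application whose identifier pair matches a stored allocation record is treated as authorized; any other pair is rejected.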
In an alternative embodiment of the first aspect, before the electronic device invokes the capabilities of the first video service, the method further comprises: the electronic device creates a first video service; the electronic device initializes a first video service.
In this embodiment, authenticating the first video service, that is, verifying whether the target application has permission to use the first video service, occurs during the creation of the first video service; for details, reference may be made to fig. 9.
The above scheme shows that, before the electronic device invokes the capabilities of the first video service, the device needs to complete the creation, initialization, and authentication of the first video service so that the application can use the display engine capability of the device.
In an optional embodiment of the first aspect, before the electronic device verifies whether the target application has permission to use the first video service, the method further comprises: the electronic device creates and initializes a second video service, the second video service being the underlying service on the basis of which the first video service is created.
In this embodiment, the second video service may correspond to the video base service shown in fig. 7, and the second video service may be used to manage various capabilities of the first video service.
The creation and initialization of the second video service shown in the above scheme is a precondition for the application to use the first video service; the capabilities of the first video service can be used only after the second video service has been created and initialized.
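A minimal sketch of the dependency described above, with hypothetical class names standing in for the base (second) and enhancement (first) video services; this is an assumption-laden illustration, not the actual framework implementation:

```python
# The second (base) video service must be created and initialized before
# the first (enhancement) video service can be created and registered.
# All class and method names are illustrative.

class VideoBaseService:            # stands in for the "second video service"
    def __init__(self):
        self.initialized = False

    def initialize(self):
        self.initialized = True


class VideoEnhancementService:     # stands in for the "first video service"
    def __init__(self, base: VideoBaseService):
        if not base.initialized:
            raise RuntimeError("base video service must be initialized first")
        self.base = base
        self.registered = False

    def register(self):
        self.registered = True

    def unregister(self):
        self.registered = False


def open_video_playback() -> VideoEnhancementService:
    base = VideoBaseService()
    base.initialize()                       # precondition for the first service
    enhancement = VideoEnhancementService(base)
    enhancement.register()
    return enhancement
```

Attempting to create the enhancement service before the base service is initialized fails, mirroring the precondition stated above.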
In an optional embodiment of the first aspect, the method further comprises: detecting a third operation of closing the video in the target application, whereupon the electronic device deregisters the first video service.
This scheme shows the trigger condition for deregistering the first video service, so as to release the device's memory and save device resources.
In an optional embodiment of the first aspect, the method further comprises: detecting a fourth operation of closing the target application, whereupon the electronic device deregisters the second video service.
This scheme shows the trigger condition for deregistering the second video service, so as to release the device's memory and save device resources.
In an alternative embodiment of the first aspect, the capabilities of the first video service include at least one of: a brightness adjustment capability for high dynamic range (HDR) video; a color adjustment capability for HDR video; a brightness adjustment capability for the layer corresponding to the HDR video; and a color adjustment capability for the layer corresponding to the HDR video.
In a second aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor for invoking a computer program in the memory to perform the method according to any of the first aspects.
In a third aspect, embodiments of the present application provide a computer-readable storage medium storing computer instructions that, when run on an electronic device, cause the electronic device to perform the method of any one of the first aspects.
In a fourth aspect, an embodiment of the present application provides a chip comprising a processor for invoking a computer program in memory to perform a method according to any of the first aspects.
In a fifth aspect, an embodiment of the present application provides a computer program product comprising a computer program which, when run, causes a computer to perform the method according to any of the first aspects.
It should be understood that the second to fifth aspects of the present application correspond to the technical solutions of the first aspect of the present application, and the advantages obtained by each aspect and the corresponding possible embodiments are similar, and are not repeated.
Drawings
Fig. 1 is a schematic view of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic view of a scenario of a video processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a software structural block diagram of an electronic device according to an embodiment of the present application;
FIG. 5 is a schematic flow chart of an application using the display engine capability of a device according to an embodiment of the present application;
FIG. 6 is a schematic flow chart of an application permission check provided in an embodiment of the present application;
Fig. 7 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 8 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 9 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 10 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 11 is a schematic flow chart of a video processing method according to an embodiment of the present application;
fig. 12 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a chip according to an embodiment of the present application.
Detailed Description
In order to clearly describe the technical solution of the embodiments of the present application, in the embodiments of the present application, the words "first", "second", etc. are used to distinguish between identical or similar items having substantially the same function and effect. For example, the first operation, the second operation, the third operation, and the fourth operation are merely for distinguishing between different operations. As another example, the first video service and the second video service are merely for distinguishing between different video services. As another example, the first identifier and the second identifier are merely for distinguishing between different identifiers. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit the number or the order of execution, and that objects so described are not necessarily different.
It should be noted that, in the embodiments of the present application, words such as "exemplary" or "such as" are used to mean serving as an example, instance, or illustration. Any embodiment or design described herein as "exemplary" or "for example" should not be construed as preferred or advantageous over other embodiments or designs. Rather, the use of words such as "exemplary" or "such as" is intended to present related concepts in a concrete fashion.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean that A exists alone, A and B exist together, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship. "At least one of" and similar phrases mean any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, and c" may represent a, b, c, a-b, a-c, b-c, or a-b-c, where a, b, and c may each be single or plural.
As mentioned above, the hardware capabilities of current devices, such as the HDR video display capability, cannot be fully utilized by application vendors, which affects the user's video-watching experience. For example, when a user views HDR video using a third-party application, video parameters cannot be adjusted according to the actual display effect of the hardware device.
To make full use of the hardware capability of the device and improve the user's experience of watching HDR video, an embodiment of the present application provides a video processing method whose main idea is as follows: the device manufacturer's cloud server opens the HDR video display capability to third-party applications, so that when the user plays an HDR video in such an application, the application can use the display engine capability of the device's underlying layers to adjust the relevant video parameters and improve the playback effect. Based on a rights application by the third-party developer, the cloud server may assign the third-party application a corresponding identifier indicating that it is allowed to use the device's HDR video display capability. When a user clicks to play an HDR video in a third-party application, the application is triggered to create a video service software development kit (SDK) and, through this SDK, to interact with the video-related services of the device's application framework layer, including creating the video base service, initializing the video base service, authenticating the video enhancement service, and registering and using the video enhancement service, thereby using the display engine capability of the device's underlying layers to improve the video playback effect. This scheme addresses the pain point that third-party application vendors cannot fully utilize the screen display capability of the device.
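The interaction sequence above can be sketched as an ordered list of steps. The step strings and the `assigned_ids` lookup are illustrative assumptions standing in for the SDK and the cloud-assigned identifier, not a real interface:

```python
# Illustrative end-to-end flow: SDK creation, base service creation and
# initialization, enhancement service authentication, then registration
# and use (or fallback when the application is not authorized).

def play_hdr_video(package: str, assigned_ids: dict) -> list:
    steps = []
    steps.append("create video service SDK")
    steps.append("create video base service")
    steps.append("initialize video base service")
    if package in assigned_ids:               # authentication via cloud-assigned id
        steps.append("authenticate video enhancement service: granted")
        steps.append("register video enhancement service")
        steps.append("use display engine capability")
    else:
        steps.append("authenticate video enhancement service: denied")
        steps.append("play video without enhancement")
    return steps
```

An unauthorized application still plays the video; it simply does so without the device's enhancement capability.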
The video processing method provided by the embodiments of the present application can be applied to any electronic device with a display function. The electronic device may also be referred to as a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), etc. The electronic device may be a mobile phone, a folding-screen phone, a smart television, a wearable device, a tablet (Pad), a computer with wireless transceiving capability, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or the like. The embodiments of the present application do not limit the specific technology or device form adopted by the electronic device.
Before describing the technical scheme of the present application, some terms related to the embodiments of the present application are explained first for understanding by those skilled in the art.
HDR Vivid (Cyanine color HDR) is a high dynamic range video technology standard. Compared with conventional technology, HDR Vivid lets users see more realistic pictures, richer color rendition, deeper dark-area detail, and more subtle highlight processing in images.
TextureView can be used to project a content stream directly into a view, and can be used to implement functions such as real-time preview; it supports moving, rotating, and scaling animations as well as screenshots, but it must be used in a hardware-accelerated window and occupies more memory than SurfaceView.
SurfaceView uses a double-buffering mechanism (a user interface (UI) thread and a rendering thread); when playing video, the picture is smoother, and graphics drawing can be performed in an independent thread without affecting the main thread.
Color gamut describes the range of colors that a given display device, or a given standard, can use or specify. Gamut mapping (tone mapping) is a mapping built between two different gamuts. The P3 color gamut is a display standard.
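As a concrete illustration of the tone-mapping idea mentioned above (compressing a high dynamic range of luminance into a display's representable range), the classic Reinhard global operator can be sketched as follows; this is a generic, well-known example, not the mapping used by the present application:

```python
def reinhard_tone_map(luminance: float) -> float:
    """Map an HDR luminance value (>= 0) into the displayable range [0, 1).

    Classic Reinhard global operator: L_out = L / (1 + L).
    Large input luminances approach 1.0 but never exceed it.
    """
    if luminance < 0:
        raise ValueError("luminance must be non-negative")
    return luminance / (1.0 + luminance)
```

The operator is monotonic, so relative brightness ordering is preserved while extreme highlights are compressed.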
The following description will take an electronic device as an example of a mobile phone, and this example does not limit the embodiments of the present application.
Fig. 1 is a schematic view of a video processing method according to an embodiment of the present application. Referring to fig. 1, the user clicks the icon of third-party video application A on the mobile phone main interface shown in fig. 1 (a) and enters the display interface of application A shown in fig. 1 (b), where the user can browse various videos, including, for example, videos recommended to the user by application A and the user's video-watching history in application A. When the user clicks video 3 in the display interface of application A, execution of the video processing scheme provided by the embodiment of the present application is triggered, including the creation, initialization, authentication, registration, and use of the video service. If application A has the right to use the display engine capability of the mobile phone, the image quality of video 3 shown in fig. 1 (c) will be significantly improved after application A uses that capability.
Fig. 2 is a schematic view of a scenario of a video processing method according to an embodiment of the present application. Referring to fig. 2, the user clicks an icon of the third party short video class application B on the main interface of the mobile phone shown in fig. 2 a, and enters a short video playing interface shown in fig. 2B, and the user views the first short video a recommended by the application B on the playing interface. In this embodiment, when the user clicks the icon of the application B, the video processing scheme provided by the embodiment of the present application is triggered to be executed, including creation, initialization, authentication, registration, use, and the like of the video service. If the application B has the right to use the mobile phone display engine capability, the image quality of the short video a shown in fig. 2B will be significantly improved after the application B uses the mobile phone display engine capability.
It should be noted that the applications in the above two examples are both third-party video applications; the embodiments of the present application do not limit the type of third-party application, as long as it can provide a video playing function. In some embodiments, the third-party application may also be, for example, a social application, a game application, or a shopping application, and execution of the video processing scheme provided by the embodiments of the present application may be triggered when the user clicks to play a video in the third-party application.
In order to better understand the embodiments of the present application, the structure of the electronic device according to the embodiments of the present application is described below. Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 3, the electronic device 300 may include: processor 310, external memory interface 320, internal memory 321, antenna 1, antenna 2, mobile communication module 350, wireless communication module 360, sensor 380, camera 393, and display screen 394, etc.
It is to be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 300. In other embodiments, electronic device 300 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 310 may include one or more processing units, such as: the processor 310 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, a display processing unit (display process unit, DPU), and/or a neural-network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
It should be understood that the interfacing relationship between the modules illustrated in the embodiments of the present application is only illustrative and does not limit the structure of the electronic device 300. In other embodiments, the electronic device 300 may also employ different interfaces, or a combination of interfaces.
The external memory interface 320 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 300. The external memory card communicates with the processor 310 through an external memory interface 320 to implement data storage functions. For example, a data file such as video is stored in an external memory card.
The internal memory 321 may be used to store one or more computer programs, including instructions. The processor 310 may cause the electronic device 300 to execute various functional applications, perform data processing, and the like by executing the above-described instructions stored in the internal memory 321. The internal memory 321 may include a program storage area and a data storage area. The program storage area can store an operating system, and may also store one or more application programs (such as video service applications) and the like. The data storage area may store data created during use of the electronic device 300 (e.g., video data), etc.
The wireless communication function of the electronic device 300 may be implemented by the antenna 1, the antenna 2, the mobile communication module 350, the wireless communication module 360, a modem processor, a baseband processor, and the like. The mobile communication module 350 provides a solution for wireless communication including 2G/3G/4G/5G, etc., applied on the electronic device 300. The wireless communication module 360 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN), bluetooth, global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared (IR), etc. applied to the electronic device 300. In the embodiment of the present application, the electronic device 300 may be communicatively connected to the cloud server through the mobile communication module 350 or the wireless communication module 360, so as to verify whether the third party application of the electronic device 300 has permission to use the display engine capability of the electronic device 300.
The electronic device 300 may implement display functions through a GPU, a display screen 394, an application processor, and the like. The GPU is a microprocessor for image processing, connected to the display screen 394 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 310 may include one or more GPUs that execute instructions to generate or change display information.
The display screen 394 is used for displaying images, videos, and the like. The display screen 394 includes a display panel. The display panel may employ a liquid crystal display (liquid crystal display, LCD), an organic light-emitting diode (organic light-emitting diode, OLED), an active-matrix organic light-emitting diode (active-matrix organic light-emitting diode, AMOLED), a flexible light-emitting diode (flexible light-emitting diode, FLED), a Mini LED, a Micro LED, a Micro-OLED, a quantum dot light-emitting diode (quantum dot light emitting diodes, QLED), or the like. In some embodiments, the display screen 394 is a foldable flexible screen. In this case, the display screen 394 may be referred to as a folding screen; the folding screen includes a first screen and a second screen, and can be expanded or folded along a folding edge to form the first screen and the second screen.
The electronic device 300 may implement a capture function through an ISP, one or more cameras 393, video codecs, a GPU, one or more display screens 394, an application processor, and the like. Video codecs are used to compress or decompress digital video. The electronic device 300 may support one or more video codecs. Thus, the electronic device 300 may play or record video in a variety of encoding formats, such as: moving picture experts group (moving picture experts group, MPEG)-1, MPEG-2, MPEG-3, MPEG-4, etc.
The sensors 380 may include pressure sensors 380A, touch sensors 380K, ambient light sensors 380L, and the like.
The pressure sensor 380A is configured to sense a pressure signal and convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 380A may be disposed on the display screen 394.
The touch sensor 380K is also known as a "touch panel". The touch sensor 380K may be disposed on the display screen 394, and the touch sensor 380K and the display screen 394 form a touch screen, also referred to as a "touchscreen". The touch sensor 380K is used to detect a touch operation acting on or near it. The touch sensor may communicate the detected touch operation to the application processor to determine the touch event type. Visual output related to the touch operation may be provided through the display screen 394. In other embodiments, the touch sensor 380K may also be located on a surface of the electronic device 300 other than the display screen 394.
The ambient light sensor 380L is used to sense ambient light level. The electronic device 300 may adaptively adjust the brightness of the display screen 394 based on the perceived ambient light level. The ambient light sensor 380L may also be used to automatically adjust white balance at the time of photographing.
In addition, an operating system runs on the above components, such as the iOS operating system, the Android open source operating system, or the Windows operating system. Applications, such as video-type applications, may be installed and run on the operating system.
The operating system of the electronic device 300 may employ a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, or a cloud architecture. In the embodiment of the present application, the software structure of the electronic device is illustrated by taking an Android system with a layered architecture as an example.
Fig. 4 is a software structure block diagram of an electronic device according to an embodiment of the present application. The layered architecture divides the software system of the electronic device into several layers, each of which has a distinct role and division of labor. The layers communicate with each other through a software interface. In some embodiments, the Android system may be divided into four layers, which are an application layer (applications), an application framework layer (application framework), the Android runtime (Android runtime) and system libraries, and a kernel layer (kernel), respectively.
It should be understood that only the functional modules related to the method for video processing according to the embodiment of the present application are shown in fig. 4, and in practice, more or fewer functional modules may be included in the electronic device, or some of the functional modules may be replaced by other functional modules.
The application layer may include a series of application packages. As shown in fig. 4, the application layer includes an application A, an application B, and a video service application (SDK). Application A and application B have video playback capabilities, e.g., video-class (including short video) third party applications, and can provide video resources to users. The video service application has functions such as brightness information management, color information management, video layer management, and client exception handling. The brightness information management can be used to adjust the brightness of a video in application A or application B; the color information management can be used to adjust the color display of the video in application A or application B; the video layer management can be used to manage and adjust relevant parameters (such as brightness and color) of the layer corresponding to the video; the client exception handling can be used to notify the application when the device display engine service is abnormal, and can also be used to monitor whether the media authentication service is normal.
In some embodiments, the application layer may also include other applications, such as cameras, gallery, calendar, talk, bluetooth, music, short message, etc. applications.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions. As shown in fig. 4, the application Framework layer may include a media authentication service, a cloud authentication service, and an application Framework (Framework).
The media authentication service includes a video service, and the video service has the functions of applying a concurrency policy and application authentication. Applying a concurrency policy includes coordinating and scheduling, within a device, which applications may enable the device display engine capability. For example, in fig. 4, when application A plays a certain video and the user switches to application B to watch a short video, application B can then use the device display engine capability enabled by the video service. Application authentication includes the following: the video service receives a video service permission verification request from an application of the application layer, and the video service can learn whether the application has permission to use the display engine capability of the device by querying a history record of permission verification or by interacting with the cloud authentication service. In some embodiments, the media authentication service may also include an audio service, for example an audio playback service, and the like.
The cloud authentication service may be configured to initiate a permission verification request to a cloud server to learn whether an application has permission to use a display engine capability of the device.
The application framework comprises a display engine service (displayEngine), and the display engine service has the functions of brightness information management, color information management, layer information management and the like.
In some embodiments, the application framework layer may also include a power management service (power manager service, PMS), an activity manager, a window manager, a content provider, a view system, a telephony manager, a resource manager, a notification manager, etc., to which embodiments of the present application are not limited in any way.
The Android runtime (Android runtime) includes a core library and virtual machines. The Android runtime is responsible for scheduling and management of the Android system. The core library consists of two parts: one part is the function libraries that the java language needs to call, and the other part is the core libraries of android. The application layer and the application framework layer run in a virtual machine. The virtual machine executes the java files of the application layer and the application framework layer as binary files. The virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
The system library may include a plurality of functional modules, for example: a surface manager (surface manager), media libraries (Media Libraries), 3D graphics processing libraries (e.g., OpenGL ES), and 2D graphics engines (e.g., SGL). The surface manager is used to manage the display subsystem and provides fusion of 2D and 3D layers for multiple applications. The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files and the like. The media libraries may support a variety of audio and video encoding formats. The 3D graphics processing library is used to implement three-dimensional graphics drawing, image rendering, synthesis, layer processing, and the like. The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer includes, for example, display driver, audio driver, camera driver, sensor driver, etc., which is not limited in any way by the embodiments of the present application.
Although the Android system is taken as an example for explanation, the basic principle of the embodiment of the application is also applicable to electronic devices of operating systems such as iOS or Windows.
The following describes the technical scheme of the present application and how the technical scheme of the present application solves the above technical problems in detail with specific embodiments. The following embodiments may be implemented independently or combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
In order to realize that the third party application can use the display engine capability of the device bottom layer, before the third party application is updated and online, a developer of the third party application can access a cloud server of a device manufacturer through the electronic device to apply for permission to use the display engine capability of the device.
Fig. 5 is a schematic flow chart of applying for the device display engine capability according to an embodiment of the present application. As shown in fig. 5, a developer logs in to the cloud server through an electronic device and sends a rights application request to the cloud server; for example, the rights application request is used to apply, for application A, for the right to use the video service (i.e., the display engine capability). The cloud server returns a rights application response to the electronic device, where the rights application response carries an identifier allocated by the cloud server for application A, and the identifier can be used to indicate that application A may use the display engine capability of the device.
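The request/response exchange of fig. 5 can be sketched in Java as follows. This is an illustrative sketch only; the class name `CloudServer`, the method name, and the `appid-` identifier format are hypothetical and do not correspond to any actual cloud server API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the rights-application exchange in Fig. 5: a
// developer requests display-engine rights for an application package, and
// the cloud server responds with an identifier (app id) allocated for it.
class CloudServer {
    private final Map<String, String> grantedIds = new HashMap<>();
    private int nextId = 1000;

    // Handle a rights-application request for the given package name and
    // return the identifier allocated for it. Repeated requests for the
    // same package return the same identifier.
    String applyForDisplayEngineRights(String packageName) {
        return grantedIds.computeIfAbsent(
                packageName, p -> "appid-" + (nextId++));
    }
}
```

In this sketch the identifier is stable per package, matching the description that the identifier later indicates that application A may use the display engine capability of the device.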
After the developer applies for the permission for the third party application to use the device display engine capability, when a user plays a video using the third party application, permission verification can be performed through the flow shown in fig. 6. After the verification succeeds, the application uses the device display engine capability to improve the quality of the currently played video.
Fig. 6 is a schematic flow chart of application permission verification according to an embodiment of the present application. As shown in fig. 6, application A already knows the application identifier (app id) allocated to it by the cloud. When the user plays a video provided by application A, application A sends a permission check request to the media authentication service, where the permission check request carries the app id. The media authentication service sends a permission verification request to the cloud authentication service, and the cloud authentication service can initiate permission verification to the cloud server after acquiring the app id of application A. After verification by the cloud server, the cloud authentication service receives the verification result returned by the cloud server and sends it to the media authentication service, completing the authentication of application A. After the rights verification succeeds, application A may use the device display engine capability to optimize the quality of the currently playing video. In addition, after the rights verification is completed, the media authentication service may store the related information of the application A rights verification for subsequent queries.
It should be noted that after the electronic device is started, the application permission checking flow may be triggered when the user opens application A and clicks to play a certain video. In other scenarios, after the user opens application A and clicks to play a certain video, the user switches from application A to another application; application A then runs in the background (it is not closed), and when the user switches back to application A and clicks to play another video, the above application permission verification process does not need to be executed again. In other scenarios, after application A plays a certain video, the user closes the video and clicks to play another video; the above application permission verification process likewise does not need to be executed. Only when application A has been closed by the user does the above application permission checking procedure need to be executed again upon restarting application A and clicking to play a certain video. In this way, device power consumption may be reduced.
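This once-per-application-lifecycle verification can be sketched as follows. The sketch assumes only what the text describes: the first check goes through to the cloud server, the stored result is reused afterwards, and closing the application forces re-verification. All names (`MediaAuthService`, `verify`, `onAppClosed`) are illustrative, not actual APIs.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the rights-verification flow of Fig. 6 plus the
// caching behaviour described above. The chain
//   application -> media authentication service -> cloud authentication
//   service -> cloud server
// runs only on the first verification; afterwards the stored history record
// is reused until the application is closed, reducing power consumption.
class MediaAuthService {
    private final Map<String, Boolean> history = new HashMap<>();
    int cloudQueries = 0;   // how often the cloud server was actually asked

    boolean verify(String appId) {
        Boolean cached = history.get(appId);
        if (cached != null) return cached;    // reuse stored verification
        boolean granted = queryCloud(appId);  // via the cloud auth service
        history.put(appId, granted);
        return granted;
    }

    // When the application is closed, its verification must be redone.
    void onAppClosed(String appId) { history.remove(appId); }

    private boolean queryCloud(String appId) {
        cloudQueries++;
        return appId != null && !appId.isEmpty();  // stand-in for the check
    }
}
```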
In some embodiments, after the user opens the application a, the above-mentioned application permission checking process may be triggered, that is, the application permission checking process is not required to be executed when the user clicks to play a certain video. In this embodiment, the application a may correspond to, for example, a short video application, and directly play a short video when the short video application is opened.
Based on the above embodiments, after the electronic device is powered on, when the user opens application A for the first time, or clicks a video in application A for the first time, the above application permission verification process needs to be executed; this process is a precondition for application A to use the device display engine capability. In addition to the above application rights verification process, other processes such as creation, initialization, and registration of the video service are included. The overall flow of the video processing scheme is described in detail below in conjunction with fig. 7-12.
Fig. 7 is a schematic flow chart of a video processing method according to an embodiment of the present application. As shown in fig. 7, the video processing method of the present embodiment includes the steps of:
s701, the application creates a video basic service.
The application (fig. 7 illustrates an application a as an example) may be a third party application having a video playing function, such as a video (including short video) class, a social class, a game class, a shopping class, and the like, which is not limited in this embodiment.
In fig. 7, after the electronic device is powered on, when a user clicks a target video in application A for the first time, or opens application A for the first time, application A may create the video basic service through the video service SDK.
The target video may be a video recommended by the application a for the user, or a video in a user history play record, or the like.
The video service SDK may correspond to the video service application shown in fig. 4. It should be understood that, the applications that have acquired the rights of the cloud server may perform information interaction with the media authentication service of the application framework layer through the video service SDK, the cloud authentication service, the display engine service, and the like.
The video basic service is one of the video services and can be used to manage the capabilities of the video enhancement service. The video services also include the video enhancement service, which provides a variety of capabilities such as HDR video enhancement and HDR Vivid, including brightness listening and adjustment, color adjustment, and the like. The video enhancement service is created by the video basic service, and the two are functionally independent. It should be appreciated that as video technology advances, the capabilities of the video enhancement service may be continually expanded.
Creating the video basic service by the application includes creating a video basic service client (see the embodiment shown in fig. 8), which may be regarded as an instance object of the video basic service. The video basic service client sets a state callback listener and sends it to the video service management module, which manages the listener. The listener is used to notify states to the application, such as notifying that the video basic service is successfully created, or that the initialization of the video basic service is successful.
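The state-callback pattern described here could look like the following sketch; the interface and class names (`StateCallback`, `VideoServiceManager`) are hypothetical illustrations of the listener mechanism, not actual SDK types.

```java
// Hypothetical sketch of the state callback listener set in S701: the
// client registers a listener with the video service management module,
// and the module notifies states back to the application through it.
interface StateCallback {
    void onServiceCreated();      // video basic service created successfully
    void onServiceInitialized();  // video basic service initialized successfully
}

class VideoServiceManager {
    private StateCallback listener;  // managed on behalf of the client

    void registerListener(StateCallback cb) { this.listener = cb; }

    void notifyCreated()     { if (listener != null) listener.onServiceCreated(); }
    void notifyInitialized() { if (listener != null) listener.onServiceInitialized(); }
}
```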
S702, the application initializes the video basic service by interacting with the media authentication service.
After the video basic service is created, the application will initialize the video basic service to know which video enhancement services are supported by the current device, such as whether to support the HDR Vivid capability, whether to support the HDR video enhancement capability, and so on. The application, upon learning the capabilities of the video enhancement services supported by the current device, can create (register) the relevant capabilities of the video enhancement services.
S703, the application interacts with the media authentication service, and the authentication flow of the video enhancement service is completed.
After knowing which capabilities the video enhancement service supported by the current device has, the application can create the related capabilities of the video enhancement service, and before creating the related capabilities of the video enhancement service, the application needs to complete an authentication process of the video enhancement service, i.e. checking whether the application has permission to use the video enhancement service of the device. The authentication process involves interaction between the cloud authentication service and the cloud server of the device, and reference may be made to the embodiment of fig. 6. After the cloud authentication service determines that the application has the right to use the video enhancement service of the device, the application may be notified to execute S704 through the media authentication service.
S704, the application registers and uses various capabilities of the video enhancement service by interacting with the media authentication service and the display engine service. The capabilities of the video enhancement service include, for example, HDR video enhancement and HDR Vivid, which in turn include brightness listening and adjustment, color adjustment, and the like.
Referring to fig. 6, the media authentication service may store related information of application rights verification, where the related information includes app id allocated to an application by the cloud service, and interface information of various capabilities of a video enhancement service that the application can use. The media authentication service may be used to manage all interface information.
The application can call the interface of each capability of the video enhancement service through the media authentication service, and initiate the request for registering and using each capability of the video enhancement service to the display engine service so as to realize the purpose that the application uses the display engine capability of the device to adjust the video image quality.
S705. the application logs off the capabilities of the video enhancement service by interacting with the media authentication service, the display engine service.
In fig. 7, when a user closes a target video in application a, application a may log out of the capabilities of the video enhancement service by interacting with the media authentication service, the display engine service.
Similar to S704, the application may invoke an interface of the capabilities of the video enhancement service through the media authentication service, initiating a request to the display engine service to log off the capabilities of the video enhancement service to stop using the display engine capabilities of the device.
S706, the application logs off the video enhancement service.
S707, the application logs off the video basic service.
In fig. 7, when the user closes the application a, the application a in turn logs off the video enhancement service, the video base service.
The video processing method shown in this embodiment describes how an application uses the device display engine capability when, after the device is powered on, a user clicks to play any video in a certain third party application for the first time, or clicks to open a short video application for the first time. The process involves the creation and initialization of the video basic service, and the authentication, registration, and use of the video enhancement service. It also describes that the application needs to log out of the capabilities of the video enhancement service after the user closes the video in the application, and to log out of the video enhancement service and the video basic service in turn after the user closes the application. This scheme enables the application to fully use the display engine capability of the device, so as to improve the display effect of the application's video picture and bring a better video watching experience to the user.
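The lifecycle of fig. 7 can be summarized in a sketch: setup runs as create, initialize, authenticate, register; teardown runs in reverse. The class and string labels below are illustrative only, keyed to the step numbers in the figure.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the overall lifecycle in Fig. 7. Services are set
// up in the order create -> initialize -> authenticate -> register, and are
// torn down in reverse (capabilities first, then the enhancement service,
// then the basic service).
class VideoServiceLifecycle {
    final List<String> log = new ArrayList<>();

    void onFirstPlay() {
        log.add("create base service");       // S701
        log.add("initialize base service");   // S702
        log.add("authenticate enhancement");  // S703
        log.add("register capabilities");     // S704
    }

    void onVideoClosed() { log.add("unregister capabilities"); }  // S705

    void onAppClosed() {
        log.add("unregister enhancement service");  // S706
        log.add("unregister base service");         // S707
    }
}
```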
Fig. 8 is a flowchart of a video processing method according to an embodiment of the present application. The present embodiment relates to S701 and S702 in fig. 7, and as shown in fig. 8, S701 creates a video basic service, which may include the steps of:
s7011, the application sends an indication to the video basic service client to create the video basic service.
S7012, the video basic service client sends an indication to the video service management module to set a state callback listener.
S7013, the video service management module manages the listener.
S7014, the video service management module sends a notification to the video basic service client through the listener to notify that the video basic service is successfully created.
S7015, the video basic service client sends a notification to the application to notify that the video basic service is successfully created.
The application includes a video service SDK, and the video service SDK includes the video basic service client and the video service management module, both of which are located in the application layer. In this embodiment, the application completes the creation of the video basic service through the video basic service client and the video service management module.
The listener can be seen as a sub-module of the video service management module; in this embodiment it is used to inform the application whether the creation of the video basic service was successful. In addition to managing the listener, the video service management module participates in creating, initializing, and authenticating the video basic service and the video enhancement service.
As shown in fig. 8, S702 initializes a video basic service, which may include the steps of:
s7021, the application sends an indication to the video basic service client to initialize the video basic service.
S7022, the video basic service client performs initialization video basic service, inquires about related capabilities and provides interface information of the related capabilities.
S7023, the video service management module binds (bind) the video basic service by interacting with the video basic service in the media authentication service.
S7024a, the video service management module informs the video basic service client of successful binding of the video basic service.
S7024b. the video base service client notifies the application that the video base service binding was successful.
If the video basic service is successfully bound, the following steps can be further executed:
s7025a, the video basic service sends a notification to the video service management module that the video basic service is successfully bound.
Notifying that the video basic service is successfully bound here means notifying that the video basic service is successfully initialized.
S7025b, the video service management module sends a notification to the application through the listener to notify that the video basic service is successfully initialized.
After the video basic service is created, the application initializes the video basic service client, and the video basic service client asynchronously initializes the video basic service through the video service management module. "Asynchronous" here can be understood as follows: the video service management module performs S7024a after performing S7023, instead of waiting for the video basic service to return a binding-success notification before performing S7024a. S7024a and S7025b are two relatively independent steps; that is, whether the video service management module returns that the video basic service is bound is not associated with whether the video basic service is successfully initialized.
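The asynchronous relationship between the bind return (S7024a) and the later initialization notification (S7025b) can be sketched as follows; `AsyncBinder` and its method names are hypothetical, chosen only to illustrate that the bind call returns without waiting for the remote service.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the asynchronous initialization in S7023-S7025b:
// bind() returns immediately after issuing the request (S7024a); the
// "initialization successful" notification (S7025b) arrives separately,
// only when the (simulated) remote video basic service reports back.
class AsyncBinder {
    final List<String> events = new ArrayList<>();
    private Runnable onInitialized;

    void bind(Runnable onInitialized) {
        this.onInitialized = onInitialized;
        events.add("bind requested");   // S7023
        events.add("bind returned");    // S7024a: no waiting on the service
    }

    // Called later by the remote service, cf. S7025a/S7025b.
    void remoteServiceReady() {
        events.add("service initialized");
        if (onInitialized != null) onInitialized.run();
    }
}
```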
After the video basic service is successfully initialized, the video basic service in the media authentication service notifies the video service management module, the video service management module notifies the listener of the successful initialization, and the application is notified through the listener that the video basic service is successfully initialized. At this point, the application may use the capabilities of the video basic service client, e.g., to query whether the current device supports the HDR Vivid capability, whether it supports the HDR video enhancement capability, and so on.
In this embodiment, the application interacts with the media authentication service of the application framework layer through the video basic service client and the video service management module, so as to complete the initialization of the video basic service.
The video processing method shown in this embodiment relates to the process of creating and initializing the video basic service, which is a precondition for an application to use the device video enhancement service; the capabilities of the video enhancement service can be used only after the creation and initialization of the video basic service are completed.
Fig. 9 is a flowchart of a video processing method according to an embodiment of the present application. The present embodiment relates to S703 in fig. 7, and as shown in fig. 9, the authentication procedure of the video enhancement service in S703 may include the following steps:
S7031, the application inquires of the video basic service client whether the video enhancement service is supported.
If the video basic service client determines that the video enhancement service is supported, S7032 is performed. This may be determined by whether interface information of the relevant capability exists: if the interface information of the video enhancement service is found in the query, it is determined that the video enhancement service is supported.
S7032, the application sends an indication to the video basic service client to create the video enhancement service.
S7033, the video basic service client instantiates a video enhancement service client. Instantiating a video enhancement service client can be understood as creating a video enhancement service client. The video enhancement service client is located at the application layer.
S7034, the video enhancement service client sends an indication to the video service management module to create the video enhancement service.
The process of creating the video enhancement service includes initializing and authenticating the video enhancement service.
S7035, the video service management module binds the video enhancement service by interacting with the video enhancement service in the media authentication service.
S7036a, the video service management module informs the video enhancement service client of successful video enhancement service binding.
S7036b, the video enhancement service client informs the video base service client that the video enhancement service binding is successful.
S7036c, the video basic service client informs the application of successful video enhancement service binding.
After S7035, the method further includes the steps of:
s7037a, the video enhancement service sends a permission verification request to a cloud authentication service (authentication SDK). The rights verification request is for requesting verification of whether the application has rights to use the video enhancement service of the device.
S7037b, the cloud authentication service returns a permission verification result to the video enhancement service. The rights verification result is used to indicate whether the application has the right to use the video enhancement service.
It should be noted that, the cloud authentication service needs to obtain the permission verification result through interaction with the cloud server of the device, and refer to the foregoing embodiment of fig. 6.
S7038, checking a video enhancement service management authority.
If the permission check result indicates that the application has permission to use the video enhancement service, the following steps can be executed:
s7039a, the video enhancement service sends a notification to the video service management module that the video enhancement service is successfully bound.
S7039b, the video service management module sends a notification to the application through the listener to notify that the video enhancement service is successfully created.
In this embodiment, the video enhancement service may be the HDR video enhancement service (capability). When the HDR video enhancement capability needs to be used, the application first needs to query whether the current device supports the capability; if not, all subsequent calls are stopped. If the HDR video enhancement capability is supported, the application can create the video enhancement service through the video basic service client. The video basic service client first instantiates an HDR video enhancement service client and binds the HDR video enhancement service through the video service management module; after the binding succeeds, the HDR video enhancement service is initialized and application permission verification is performed. The HDR video enhancement service sends a permission verification request to the cloud authentication service, where the request includes the package name and the app id (i.e., the identifier allocated by the cloud server for the application). The cloud authentication service sends the permission verification request to the cloud server, obtains the permission verification result, and finally returns it to the HDR video enhancement service. The HDR video enhancement service then sends a notification to the video service management module, so that the video service management module notifies the state listener that the HDR video enhancement service is successfully initialized; the notification finally reaches the application, and after receiving the message of successful initialization, the application can enable the relevant capabilities of HDR video enhancement.
The video processing method shown in this embodiment relates to an authentication process of a video enhancement service, and an application can enable each capability of the video enhancement service only when having the right to use the video enhancement service of the device, so after the creation and initialization of the video base service are completed, it is necessary to check whether the application has the right to use the video enhancement service.
Fig. 10 is a flowchart of a video processing method according to an embodiment of the present application. The present embodiment relates to S704 in fig. 7, and as shown in fig. 10, S704 registers and uses various capabilities of the video enhancement service (hereinafter, described with reference to registering and using the HDR video enhancement capability as an example), and may include the steps of:
S7041, the application enables the HDR video enhancement capability of Texture/Surface view through the video enhancement service client.
The application notifies the video enhancement service client to enable HDR video enhancement capabilities of Texture/Surface view by invoking the interface of the video enhancement service client.
S7042, the video enhancement service client queries whether the video enhancement service has permission to use the HDR video enhancement capability of Texture/Surface view.
If the video enhancement service determines that it has rights to use the HDR video enhancement capability of Texture/Surface view, S7043 is performed.
S7043, the video enhancement service sends an indication to the display engine service, indicating to enable the HDR video enhancement capability of Texture/Surface view. Thus, the application can use the HDR video enhancement capability of the device to improve the video image quality.
S7044a, the display engine service sends a notification to the video enhancement service, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
S7044b, the video enhancement service sends a notification to the video enhancement service client, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
S7044c, the video enhancement service client sends a notification to the application, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
After the application uses the HDR video enhancement capability of the device's Texture/Surface view, the graphics processor (GPU) of the device renders the Surface view as a single layer to increase the screen brightness of the specified layer (the layer of video 3 shown in FIG. 1c), and fixes the color gamut of that layer to the standard P3 color gamut.
In some embodiments, the application can register the brightness monitoring capability to obtain layer brightness changes in real time and dynamically adjust the color gamut mapping according to those changes, thereby improving video image quality. Registering the brightness monitoring capability may include the following steps:
S7045, the application sends an indication to the video enhancement service client, and the indication is used for registering the brightness monitoring capability.
S7046, the video enhancement service client inquires whether the video enhancement service has permission to use the brightness monitoring capability.
If the video enhancement service determines that it is authorized to use the brightness listening capability, S7047 is performed.
S7047, the video enhancement service registers brightness monitoring capability with the display engine service. The application can thus use the luminance listening capability of the device to adjust the display luminance of the screen.
S7048a, the display engine service sends a notification to the video enhancement service, and notifies successful registration of the brightness monitoring capability.
S7048b, the video enhancement service sends a notification to the video enhancement service client to notify that the brightness monitoring capability is successfully registered.
S7048c, the video enhancement service client sends a notification to the application, notifying that the brightness monitoring capability is successfully registered.
In this embodiment, the video enhancement service may be an HDR video enhancement service. After the application learns that the HDR video enhancement service is successfully created, it can enable the HDR video enhancement capability of Texture/Surface view through an HDR video enhancement client (which may correspond to the video enhancement service client). The HDR video enhancement client calls the HDR video enhancement service to query whether the current application has permission; the HDR video enhancement service determines this through permission management control, and if the permission exists, it calls the display engine service to perform HDR video enhancement of the Texture/Surface view, so as to improve the screen brightness and realize conversion of the color mode and the video standard P3 color gamut. After the HDR video enhancement capability is enabled, the application can also register the brightness monitoring capability through the HDR video enhancement client; the registration process is consistent with enabling the HDR video enhancement capability. After the brightness monitoring capability is successfully registered, the application can acquire screen brightness information in real time and dynamically adjust the display effect of the video according to the brightness.
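The permission-gated enabling of the HDR capability and the registration of a brightness listener can be sketched as follows; the class, method, and capability names are hypothetical, assumed only for illustration:

```python
# Hypothetical sketch of permission-gated capability enabling and brightness
# listener registration; all names are illustrative only.

class DisplayEngineService:
    def __init__(self):
        self.hdr_enabled = False
        self.listeners = []

    def enable_hdr(self):
        self.hdr_enabled = True

    def register_brightness_listener(self, callback):
        self.listeners.append(callback)

    def on_brightness_change(self, brightness):
        # Delivers real-time layer brightness changes to registered listeners.
        for cb in self.listeners:
            cb(brightness)


class VideoEnhancementService:
    def __init__(self, engine, permissions):
        self.engine = engine
        self.permissions = permissions  # capabilities the app may use

    def enable_hdr_enhancement(self):
        # Permission is checked before the display engine service is called.
        if "hdr_enhancement" not in self.permissions:
            return False
        self.engine.enable_hdr()
        return True

    def register_brightness_listener(self, callback):
        if "brightness_listening" not in self.permissions:
            return False
        self.engine.register_brightness_listener(callback)
        return True
```

The same permission check guards both operations, matching the statement that the registration process is consistent with enabling the HDR capability.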
The video processing method shown in this embodiment mainly relates to a registration process of each capability of the video enhancement service. On the premise that the application has the permission to use the video enhancement service, the application can initiate various capabilities of registering the video enhancement service to the display engine service so as to use the various capabilities of the video enhancement service to promote the display effect of video pictures in the application.
Fig. 11 is a flowchart of a video processing method according to an embodiment of the present application. The present embodiment relates to S705 in fig. 7. As shown in fig. 11, S705 logs out various capabilities of the video enhancement service (hereinafter, logging out the HDR video enhancement capability is described as an example), and may include the following steps:
S7051, the application turns off the HDR video enhancement capability of Texture/Surface view through the video enhancement service client.
The application notifies the video enhancement service client to log out the HDR video enhancement capability of Texture/Surface view by invoking the interface of the video enhancement service client.
S7052, the video enhancement service client queries whether the video enhancement service has permission to turn off the HDR video enhancement capability of Texture/Surface view.
If the video enhancement service determines that it is authorized to turn off the HDR video enhancement capability of the Texture/Surface view, then S7053 is performed.
S7053, the video enhancement service sends an indication to the display engine service indicating the HDR video enhancement capability to turn off Texture/Surface view.
S7054a, the display engine service sends a notification to the video enhancement service informing that the HDR video enhancement capability of the Texture/Surface view has been turned off.
S7054b, the video enhancement service sends a notification to the video enhancement service client, notifying that the HDR video enhancement capability of Texture/Surface view has been turned off.
S7054c, the video enhancement service client sends a notification to the application informing that the HDR video enhancement capability of the Texture/Surface view has been turned off.
In some embodiments, there is also a need to de-register the brightness listening capability of the video enhancement service, which may include the steps of:
S7055, the application logs out the brightness monitoring capability through the video enhancement service client.
S7056, the video enhancement service client inquires whether the video enhancement service has authority to cancel the brightness monitoring capability.
If the video enhancement service determines that there is authority to cancel the brightness listening capability, S7057 is executed.
S7057, the video enhancement service sends an indication to the display engine service, indicating to cancel the brightness monitoring capability.
S7058a, the display engine service sends a notification to the video enhancement service, notifying that the brightness monitoring capability has been logged off.
S7058b, the video enhancement service sends a notification to the video enhancement service client, notifying that the brightness monitoring capability has been logged off.
S7058c, the video enhancement service client sends a notification to the application, notifying that the brightness monitoring capability has been logged off.
In this embodiment, when a user closes a video being played in an application, the application needs to call the interface for turning off the HDR video enhancement capability in time to log out the HDR video enhancement capability, and call the interface for logging out brightness monitoring to log out the brightness monitoring capability.
The video processing method shown in this embodiment relates to a logout procedure of each capability of the video enhancement service. When a user closes a video currently played by an application, the application can initiate to cancel various capabilities of the video enhancement service so as to release the memory space of the device and save the device resources.
In some embodiments, when the user closes the application, the HDR video enhancement client needs to be invoked in time to log off the HDR video enhancement capability, and the video basic service client needs to be invoked to log off the video basic service, so that the memory space of the device is released, and the device resources are saved.
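The teardown order described above, releasing per-video capabilities when a video is closed and then logging off the enhancement service before the base service when the application exits, can be sketched as follows (method and step names are hypothetical):

```python
# Hypothetical sketch of the teardown order; names are illustrative only.

class ServiceLifecycle:
    def __init__(self):
        self.log = []

    def on_video_closed(self):
        # Closing a video releases the per-video capabilities first.
        self.log.append("turn_off_hdr_capability")
        self.log.append("log_out_brightness_listening")

    def on_app_closed(self):
        # Closing the application logs off the enhancement service before
        # the underlying base service, releasing device memory.
        self.log.append("log_out_video_enhancement_service")
        self.log.append("log_out_video_base_service")
```
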
Based on the above embodiments, the video processing scheme of the present application is fully described below in connection with one specific embodiment.
Fig. 12 is a flowchart of a video processing method according to an embodiment of the present application. As shown in fig. 12, the video processing method of the present embodiment includes the steps of:
S7011, the application sends an indication to the video basic service client, indicating to create the video basic service.
S7012, the video basic service client sends an indication to the video service management module, indicating to set a state callback monitor.
S7013, a video service management module manages the monitor.
S7014, the video service management module sends a notification to the video basic service client through the monitor to notify that the video basic service is successfully created.
S7015, the video basic service client sends a notification to the application, and the video basic service client notifies that the video basic service is successfully created.
After the video base service is successfully created, the following steps are performed:
S7021, the application sends an indication to the video basic service client, indicating to initialize the video basic service.
S7022, the video basic service client initializes the video basic service, queries the related capabilities, and provides interface information of the related capabilities.
S7023, the video service management module binds the video basic service through interaction with the video basic service in the media authentication service.
S7024a, the video service management module informs the video basic service client of successful binding of the video basic service.
S7024b, the video basic service client notifies the application that the video basic service binding was successful.
If the video basic service is initialized successfully, the following steps are executed:
S7025a, the video basic service sends a notification to the video service management module, notifying that the video basic service is successfully initialized.
S7025b, the video service management module sends a notification to the application through the monitor to inform that the video basic service is successfully initialized.
In some embodiments, after the video base service is initialized successfully, the following steps may also be performed:
S1210, the application uses the HDR Vivid interface.
The application invokes the HDR Vivid interface through the video base service client.
S1220. video base service client queries whether HDR Vivid capability is available.
The video base service client queries whether it has HDR Vivid capability by interacting with the video base service.
If the video basic service determines that the HDR Vivid capability is available, S1230 is performed.
S1230, the video basic service sends a notification to the video basic service client, notifying that the HDR Vivid capability is successfully queried.
S1240, the video basic service client sends a notification to the application, notifying that the HDR Vivid capability is successfully queried. At this point, the application may use the HDR Vivid capability of the device to improve video image quality.
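Steps S1210 to S1240 amount to a capability query that gates the use of HDR Vivid; a minimal sketch, with hypothetical class and capability names, might look like:

```python
# Hypothetical sketch of the HDR Vivid capability query (S1210-S1240);
# names are illustrative only.

class VideoBasicService:
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)

    def has_capability(self, name):
        return name in self.capabilities


def use_hdr_vivid(basic_service):
    # The client queries the basic service before the capability is used.
    if basic_service.has_capability("hdr_vivid"):
        return "hdr_vivid_in_use"
    return "unsupported"
```
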
In some embodiments, after the video basic service initialization is successful, the following steps may be performed in addition to the above steps S1210 to S1240:
S7031, the application queries the video basic service client whether the video enhancement service is supported.
If the video base service client determines that the video enhancement service is supported, S7032 is performed.
S7032, the application sends an indication to the video basic service client, indicating to create the video enhancement service.
S7033, the video basic service client instantiates a video enhancement service client.
S7034, the video enhancement service client sends an indication to the video service management module, indicating to create the video enhancement service.
S7035, the video service management module binds the video enhancement service by interacting with the video enhancement service in the media authentication service.
S7036a, the video service management module informs the video enhancement service client of successful video enhancement service binding.
S7036b, the video enhancement service client informs the video base service client that the video enhancement service binding is successful.
S7036c, the video basic service client informs the application of successful video enhancement service binding.
After S7035, the method further includes the steps of:
S7037a, the video enhancement service sends a permission verification request to the cloud authentication service. The permission verification request is used to request verification of whether the application has permission to use the video enhancement service of the device.
S7037b, the cloud authentication service returns a permission verification result to the video enhancement service. The rights verification result is used to indicate whether the application has or has not rights to use the video enhancement service.
S7038, permission verification is performed through video enhancement service management.
If the permission check result indicates that the application has permission to use the video enhancement service, the following steps can be executed:
S7039a, the video enhancement service sends a notification to the video service management module, notifying that the video enhancement service is successfully created.
S7039b, the video service management module sends a notification to the application through the monitor to notify that the video enhancement service is successfully created.
After the video enhancement service creation is successful, the following steps may also be performed:
S7041, the application enables the HDR video enhancement capability of Texture/Surface view through the video enhancement service client.
S7042, the video enhancement service client queries whether the video enhancement service has permission to use the HDR video enhancement capability of Texture/Surface view.
If the video enhancement service determines that it has rights to use the HDR video enhancement capability of Texture/Surface view, S7043 is performed.
S7043, the video enhancement service sends an indication to the display engine service, indicating to enable the HDR video enhancement capability of Texture/Surface view.
S7044a, the display engine service sends a notification to the video enhancement service, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
S7044b, the video enhancement service sends a notification to the video enhancement service client, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
S7044c, the video enhancement service client sends a notification to the application, notifying that the HDR video enhancement capability of Texture/Surface view can be used.
S7045, the application sends an indication to the video enhancement service client, and the indication is used for registering the brightness monitoring capability.
S7046, the video enhancement service client inquires whether the video enhancement service has permission to use the brightness monitoring capability.
If the video enhancement service determines that it is authorized to use the brightness listening capability, S7047 is performed.
S7047, the video enhancement service registers brightness monitoring capability with the display engine service.
S7048a, the display engine service sends a notification to the video enhancement service, and notifies successful registration of the brightness monitoring capability.
S7048b, the video enhancement service sends a notification to the video enhancement service client to notify that the brightness monitoring capability is successfully registered.
S7048c, the video enhancement service client sends a notification to the application, notifying that the brightness monitoring capability is successfully registered.
It should be noted that this embodiment does not limit the execution order between the application enabling the HDR video enhancement capability of Texture/Surface view and the application registering the brightness monitoring capability; the above execution order is merely an example.
In some embodiments, when a user closes a video in an application, the following steps may be performed:
S7051, the application turns off the HDR video enhancement capability of Texture/Surface view through the video enhancement service client.
S7052, the video enhancement service client queries whether the video enhancement service has permission to turn off the HDR video enhancement capability of Texture/Surface view.
If the video enhancement service determines that it is authorized to turn off the HDR video enhancement capability of the Texture/Surface view, then S7053 is performed.
S7053, the video enhancement service sends an indication to the display engine service indicating the HDR video enhancement capability to turn off Texture/Surface view.
S7054a, the display engine service sends a notification to the video enhancement service informing that the HDR video enhancement capability of the Texture/Surface view has been turned off.
S7054b, the video enhancement service sends a notification to the video enhancement service client, notifying that the HDR video enhancement capability of Texture/Surface view has been turned off.
S7054c, the video enhancement service client sends a notification to the application informing that the HDR video enhancement capability of the Texture/Surface view has been turned off.
S7055, the application logs out the brightness monitoring capability through the video enhancement service client.
S7056, the video enhancement service client inquires whether the video enhancement service has authority to cancel the brightness monitoring capability.
If the video enhancement service determines that there is authority to cancel the brightness listening capability, S7057 is executed.
S7057, the video enhancement service sends an indication to the display engine service, indicating to cancel the brightness monitoring capability.
S7058a, the display engine service sends a notification to the video enhancement service, notifying that the brightness monitoring capability has been logged off.
S7058b, the video enhancement service sends a notification to the video enhancement service client, notifying that the brightness monitoring capability has been logged off.
S7058c, the video enhancement service client sends a notification to the application, notifying that the brightness monitoring capability has been logged off.
It should be noted that this embodiment does not limit the execution order between the application turning off the HDR video enhancement capability of Texture/Surface view and the application logging out the brightness monitoring capability; the above execution order is merely an example.
In some embodiments, when a user closes an application, the following steps may be performed:
S7061, the application sends an indication to the video enhancement service client, indicating to log out the video enhancement service.
S7062, the video enhancement service client sends a notification to the application, informing that the video enhancement service has been logged off.
S7071, the application sends an indication to the video basic service client to indicate to log out the video basic service.
S7072, the video basic service client sends a notification to the application, notifying that the video basic service has been logged off.
The steps in the video processing method shown in this embodiment may be referred to the descriptions of fig. 7 to 11, and will not be repeated here.
The embodiment of the application provides a video processing method applied to an electronic device. The method includes: the electronic device detects an operation of playing a video in a target application and verifies whether the target application has permission to use a first video service, where the first video service is a service provided by the electronic device for optimizing the video playing effect; and if the target application has permission to use the first video service, the electronic device invokes the capabilities of the first video service to optimize the image quality of the video in the target application.
In this embodiment, the first video service may correspond to the video enhancement service shown in fig. 7, and the video enhancement service includes, for example, a high dynamic range HDR video enhancement service. The target application may be a video-playing-enabled application, such as a video (including short video) class application, a game class application, a social class application, a shopping class application, and the like.
The embodiment shows that when a user triggers to play a video in a target application, the electronic device determines whether to call the display engine capability of the device to optimize the video image quality of the target application by verifying whether the target application has permission to use the display engine capability of the device, so that the problem that the display engine capability of the device cannot be fully utilized by a third party application at present is solved, and the experience of watching the video by the user is improved.
In some embodiments, detecting an operation to play a target video includes: detecting a first operation of starting a target application; or, a first operation of opening the target application is detected, and a second operation of opening the target video in the target application is detected.
The present embodiment shows two specific application scenarios:
In one scenario, the user clicks to open the target application, and the display interface of the target application provides multiple video resources; the user can click any video resource in the display interface to watch it. When the user clicks a video resource, the device is triggered to verify whether the target application has permission to use the first video service.
In another scenario, the user clicks to open the target application, and the target application directly plays a certain video resource, such as a short video; at this time, the device is triggered to verify whether the target application has permission to use the first video service.
In some embodiments, the electronic device verifying whether the target application has permission to use the first video service comprises: the electronic equipment sends a first request to the cloud server, wherein the first request is used for requesting to verify whether the target application has permission to use the first video service; the electronic device receives a first response from the cloud server, wherein the first response is used for indicating that the target application has permission to use the first video service or indicating that the target application does not have permission to use the first video service.
In this embodiment, the cloud server may be a cloud server of the electronic device. The cloud server may be configured to allocate, for the target application, a permission identifier for using the display engine capability of the electronic device (i.e., the second identifier described below), and is further configured to verify whether a given target application has permission to use the display engine capability of the electronic device. The display engine capability includes capabilities of the first video service, such as HDR video enhancement and HDR Vivid.
The embodiment shows that the electronic device verifies whether the target application has permission to use the first video service of the current device through the cloud server, so that the image quality of video playing of the application is improved.
In some embodiments, the first request includes at least one of a first identification of the target application, a second identification of the target application; the first identifier is used for indicating the name of the target application, and the second identifier is an identifier distributed for the target application by the cloud server.
In this embodiment, the name of the target application may also be referred to as the package name of the target application.
The cloud server can determine, through a local query, whether the target application has permission to use the first video service of the device (the correspondence between first identifiers and second identifiers is stored locally). If the allocation record of the target application is stored locally, that is, the correspondence between the first identifier and the second identifier of the target application is recorded, the cloud server determines that the target application has permission to use the first video service of the device.
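The server-side check described above reduces to a lookup of the stored first-identifier-to-second-identifier mapping. A minimal sketch follows; the function name and identifier values are hypothetical:

```python
# Hypothetical sketch of the cloud server's local permission lookup;
# the identifiers and record contents are illustrative only.

def has_permission(records, first_identifier, second_identifier):
    """Return True if a stored allocation record matches both identifiers.

    records maps a first identifier (the application's package name) to
    the second identifier assigned to that application by the cloud server.
    """
    return records.get(first_identifier) == second_identifier
```
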
In some embodiments, before the electronic device invokes the capabilities of the first video service, the video processing method further comprises: the electronic device creates a first video service; the electronic device initializes a first video service.
In this embodiment, verifying the first video service, that is, verifying whether the target application has permission to use the first video service, occurs in the process of creating the first video service, and specifically, reference may be made to fig. 9.
This embodiment shows that the device needs to complete creation, initialization and authentication of the first video service before the electronic device invokes the capabilities of the first video service in order for the application to be able to use the display engine capabilities of the device.
In some embodiments, before the electronic device verifies whether the target application has the right to use the first video service, the video processing method further includes: the electronic device creates and initializes a second video service, which is an underlying service that creates the first video service.
In this embodiment, the second video service may correspond to the video base service shown in fig. 7, and the second video service may be used to manage various capabilities of the first video service.
The creation and initialization of the second video service shown in this embodiment are the preconditions of the application using the first video service, and each capability of the first video service can be used only after the creation and initialization of the second video service are completed.
In some embodiments, a third operation to close the video in the target application is detected, and the electronic device logs off the first video service.
The embodiment shows a trigger condition for logging out the first video service to release the memory space of the device and save the device resources.
In some embodiments, a fourth operation to close the target application is detected, and the electronic device logs off the second video service.
The embodiment shows a trigger condition for logging out the second video service to release the memory space of the device and save the device resources.
In some embodiments, the capabilities of the first video service include at least one of: brightness adjustment capability for high dynamic range HDR video; the ability to adjust the color of HDR video; the brightness adjusting capability of the layer corresponding to the HDR video; the ability to adjust the color of the layer to which the HDR video corresponds.
The embodiment of the application also provides electronic equipment which can be terminal equipment or circuit equipment built in the terminal equipment. The electronic device may be adapted to perform the methods or steps of the method embodiments described above.
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application, where, as shown in fig. 13, the electronic device includes a processor 1301, a communication line 1304, and at least one communication interface (the communication interface 1303 is exemplified in fig. 13).
Processor 1301 can be a general purpose central processing unit (central processing unit, CPU), microprocessor, application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the program of the present application.
Communication line 1304 may include circuitry for communicating information between the components described above.
The communication interface 1303 uses any transceiver-like device for communicating with other devices or communication networks, such as ethernet, wireless local area network (wireless local area networks, WLAN), etc.
In some embodiments, the electronic device may also include memory 1302.
The memory 1302 may be, but is not limited to, read-only memory (ROM) or other type of static storage device that can store static information and instructions, random access memory (random access memory, RAM) or other type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (electrically erasable programmable read-only memory, EEPROM), compact disk read-only memory (CD-ROM) or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and coupled to the processor via communication line 1304. The memory may also be integrated with the processor.
The memory 1302 is used for storing computer-executable instructions for performing aspects of the present application and is controlled by the processor 1301 for execution. The processor 1301 is configured to execute computer-executable instructions stored in the memory 1302, thereby implementing the video processing method provided by the embodiment of the present application.
The computer-executable instructions in the embodiments of the present application may also be referred to as application program code; this is not limited in the embodiments of the present application.
As an example, processor 1301 may include one or more CPUs, such as CPU0 and CPU1 in fig. 13.
As one example, an electronic device may include multiple processors, such as processor 1301 and processor 1305 in fig. 13. Each of these processors may be a single-core (single-CPU) processor or may be a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In some embodiments, the electronic device may also include a display 1306, the display 1306 for playing the processed video.
Fig. 14 is a schematic structural diagram of a chip according to an embodiment of the present application. As shown in fig. 14, the chip 1400 includes one or more (including two) processors 1420 and a communication interface 1430.
In some embodiments, the memory 1440 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In an embodiment of the application, memory 1440 may include read only memory and random access memory and provide instructions and data to processor 1420. A portion of memory 1440 may also include non-volatile random access memory (non-volatile random access memory, NVRAM).
In an embodiment of the application, the processor 1420, the communication interface 1430, and the memory 1440 are coupled together by a bus system 1410. The bus system 1410 may include a power bus, a control bus, a status signal bus, and the like, in addition to a data bus. For ease of description, the various buses are labeled as the bus system 1410 in FIG. 14.
The methods described above in the embodiments of the present application may be applied to the processor 1420 or implemented by the processor 1420. The processor 1420 may be an integrated circuit chip with signal processing capabilities. During implementation, the steps of the above methods may be completed by integrated logic circuitry in hardware in the processor 1420 or by instructions in the form of software. The processor 1420 may be a general-purpose processor (e.g., a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate, transistor logic, or a discrete hardware component, and the processor 1420 may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
In the above embodiments, the instructions stored in the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be pre-written into the memory, or may be downloaded and installed in the memory in the form of software.
Embodiments of the present application also provide a computer program product comprising one or more computer instructions. When the computer instructions are loaded and executed on an electronic device, the electronic device performs the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related embodiments above and are not repeated here.
The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired means (e.g., coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, or microwave).
The embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when run on an electronic device, cause the electronic device to perform the technical solutions in the foregoing embodiments. The implementation principles and technical effects are similar to those of the related embodiments above and are not repeated here.
A computer-readable storage medium may include computer storage media and communication media, and may include any medium that can transfer a computer program from one place to another. The computer-readable storage medium may include a compact disc read-only memory (CD-ROM), a RAM, a ROM, an EEPROM, or other optical disc storage, and may also include magnetic disk storage or other magnetic storage devices. Moreover, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.
The foregoing is merely illustrative of embodiments of the present invention, but the protection scope of the present invention is not limited thereto; any variation or substitution readily conceivable by a person skilled in the art within the technical scope disclosed herein shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (12)

1. A video processing method, applied to an electronic device, the method comprising:
detecting an operation of playing a video in a target application, and verifying, by the electronic device, whether the target application has permission to use a first video service, wherein the first video service is a service provided by the electronic device for optimizing a video playing effect;
and if the target application has permission to use the first video service, invoking, by the electronic device, capabilities of the first video service to optimize image quality of the video in the target application.
2. The method of claim 1, wherein detecting the operation of playing the target video comprises:
detecting a first operation of opening the target application; or
detecting a first operation of opening the target application and a second operation of opening the target video in the target application.
3. The method of claim 1 or 2, wherein the electronic device verifying whether the target application has permission to use the first video service comprises:
the electronic device sends a first request to a cloud server, wherein the first request is used for requesting to verify whether the target application has permission to use the first video service;
the electronic device receives a first response from the cloud server, wherein the first response is used for indicating that the target application has permission to use the first video service or that the target application does not have permission to use the first video service.
4. The method according to claim 3, wherein the first request comprises at least one of a first identifier of the target application or a second identifier of the target application; the first identifier is used for indicating the name of the target application, and the second identifier is an identifier allocated to the target application by the cloud server.
5. The method of any of claims 1-4, wherein, before the electronic device invokes the capabilities of the first video service, the method further comprises:
the electronic device creating the first video service; and
the electronic device initializing the first video service.
6. The method of any of claims 1-5, wherein, before the electronic device verifies whether the target application has permission to use the first video service, the method further comprises:
the electronic device creates and initializes a second video service, wherein the second video service is a basic service for creating the first video service.
7. The method according to any one of claims 1 to 6, further comprising:
detecting a third operation of closing the video in the target application, and deregistering, by the electronic device, the first video service.
8. The method according to any one of claims 1 to 7, further comprising:
detecting a fourth operation of closing the target application, and deregistering, by the electronic device, the second video service.
9. The method according to any one of claims 1 to 8, wherein the capabilities of the first video service comprise at least one of:
a brightness adjustment capability for high dynamic range (HDR) video;
a color adjustment capability for the HDR video;
a brightness adjustment capability for a layer corresponding to the HDR video; and
a color adjustment capability for the layer corresponding to the HDR video.
10. An electronic device, comprising: a memory and a processor, wherein the processor is configured to invoke a computer program in the memory to perform the method of any one of claims 1 to 9.
11. A computer readable storage medium storing computer instructions which, when run on an electronic device, cause the electronic device to perform the method of any one of claims 1 to 9.
12. A chip, comprising a processor configured to invoke a computer program in a memory to perform the method of any one of claims 1 to 9.
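The overall flow claimed above (detect playback, verify permission with the cloud server, create and initialize the first video service, invoke its claim-9 capabilities, and deregister the service when the video closes) can be sketched as follows. This is a minimal illustrative sketch only; all class and method names (`CloudServer`, `FirstVideoService`, `ElectronicDevice`, `verify`, `optimize`, and the capability strings) are hypothetical stand-ins, not part of the patent or any real device API.

```python
# Hypothetical sketch of the claimed flow. All names below are
# illustrative assumptions, not taken from the patent text.

class CloudServer:
    """Stands in for the cloud server of claims 3-4."""
    def __init__(self, authorized_apps):
        self._authorized = set(authorized_apps)

    def verify(self, first_id, second_id=None):
        # Claim 4: the first request carries at least one of the
        # two identifiers of the target application.
        return first_id in self._authorized or second_id in self._authorized


class FirstVideoService:
    """Stands in for the first video service; the four capability
    names mirror the four items listed in claim 9."""
    CAPABILITIES = (
        "hdr_brightness", "hdr_color",
        "layer_brightness", "layer_color",
    )

    def __init__(self):
        self.initialized = False

    def initialize(self):
        self.initialized = True

    def optimize(self, video):
        # Invoke each capability on the playing video.
        assert self.initialized
        return {cap: f"applied to {video}" for cap in self.CAPABILITIES}


class ElectronicDevice:
    def __init__(self, cloud):
        self.cloud = cloud
        self.first_service = None

    def on_play_video(self, app_name, video):
        # Claim 1: verify permission before using the first video service.
        if not self.cloud.verify(app_name):
            return None
        # Claim 5: create and initialize the service before invoking it.
        self.first_service = FirstVideoService()
        self.first_service.initialize()
        return self.first_service.optimize(video)

    def on_close_video(self):
        # Claim 7: deregister the first video service when the video closes.
        self.first_service = None


device = ElectronicDevice(CloudServer({"video_app"}))
result = device.on_play_video("video_app", "movie.mp4")
print(sorted(result))  # the four claim-9 capability names
print(device.on_play_video("unauthorized_app", "movie.mp4"))  # None
```

Note that in this sketch an unauthorized application simply receives no optimization, matching the claim-1 condition that the capabilities are invoked only if the permission check succeeds.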
CN202211467519.1A 2022-11-22 2022-11-22 Video processing method, device and storage medium Active CN116708886B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211467519.1A CN116708886B (en) 2022-11-22 2022-11-22 Video processing method, device and storage medium


Publications (2)

Publication Number Publication Date
CN116708886A true CN116708886A (en) 2023-09-05
CN116708886B CN116708886B (en) 2024-05-14

Family

ID=87834505

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211467519.1A Active CN116708886B (en) 2022-11-22 2022-11-22 Video processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN116708886B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106990831A (en) * 2017-04-10 2017-07-28 深圳市金立通信设备有限公司 A kind of method and terminal for adjusting screen intensity
CN110113483A (en) * 2019-04-19 2019-08-09 华为技术有限公司 Use the powerful method of the increasing of electronic equipment and relevant apparatus
CN111263188A (en) * 2020-02-17 2020-06-09 腾讯科技(深圳)有限公司 Video image quality adjusting method and device, electronic equipment and storage medium
WO2022160991A1 (en) * 2021-01-29 2022-08-04 华为技术有限公司 Permission control method and electronic device


Also Published As

Publication number Publication date
CN116708886B (en) 2024-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant