CN117806586A - Display device and data processing method - Google Patents


Publication number
CN117806586A
Authority
CN
China
Prior art keywords
sound data
port
memory address
party application
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211737596.4A
Other languages
Chinese (zh)
Inventor
张来智
韩征
于皓丞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN202211737596.4A priority Critical patent/CN117806586A/en
Publication of CN117806586A publication Critical patent/CN117806586A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16Sound input; Sound output
    • G06F3/162Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a display device comprising a display and a controller connected to the display. The controller is provided with a client port built into a target application and a service port corresponding to the client port. The controller is configured to: acquire, through the service port, array sound data corresponding to original sound data in the audio hardware layer; transmit the array sound data to the client port through the service port; and, based on the array sound data, transmit the memory address of the original sound data through the client port to a third party application built into the target application, where the memory address is used by the third party application to acquire the original sound data.

Description

Display device and data processing method
Technical Field
Embodiments of the present application relate to data processing technology, and more particularly, to a display device and a data processing method.
Background
In the process of audio interaction between the display device and the user, if an audio resource needs to be processed, it must be transmitted to the relevant applications in a cross-process manner. At high transmission frequencies this consumes a large amount of the display device's central processing unit (Central Processing Unit, CPU) resources, causing the display device's system to lag.
Disclosure of Invention
The exemplary embodiments of the application provide a display device and a data processing method, which can reduce the usage of the display device's CPU resources and avoid system lag on the display device.
In a first aspect, an embodiment of the present application provides a display device, including:
a display;
the controller is connected with the display and is provided with a client port built in the target application and a service port corresponding to the client port;
the controller is configured to:
acquiring, through a service port, array sound data corresponding to the original sound data in the audio hardware layer;
transmitting the array sound data to the client port through the service port;
and transmitting the memory address of the original sound data to a third party application built in the target application through the client port based on the array sound data, wherein the memory address is used for the third party application to acquire the original sound data.
In some embodiments of the present application, in the step of acquiring, through the service port, the array sound data corresponding to the original sound data in the audio hardware layer, the controller is further configured to:
calling a first interface of a hardware abstraction layer through the service port to acquire the original sound data from the audio hardware layer;
the original sound data is assembled into the array sound data through the service port.
In some embodiments of the present application, in the step of transmitting, through the client port, the memory address of the original sound data to a third party application built in the target application based on the array sound data, the controller is further configured to:
acquiring, through the client port, a pointer to the first element of the array memory referenced by the array sound data, so as to obtain the memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
In some embodiments of the present application, in the step of transmitting the memory address to the third party application through the client port, the controller is further configured to:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
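As an illustrative sketch only (not code from the patent), the conversion of a memory address into a transmissible integer type can look like the following in C++, where `toFirstDataType` and `fromFirstDataType` are hypothetical names and `int64_t` stands in for the unspecified "first data type":

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical sketch: convert the memory address of the first element of
// the array sound data into a plain 64-bit integer (the "first data type")
// that can be passed across a port boundary.
int64_t toFirstDataType(const std::vector<uint16_t>& arraySoundData) {
    // The memory address of the original sound data is the pointer to the
    // array's first element, reinterpreted as an integer.
    return reinterpret_cast<int64_t>(arraySoundData.data());
}

// The receiver turns the integer back into a pointer to the samples.
const uint16_t* fromFirstDataType(int64_t addr) {
    return reinterpret_cast<const uint16_t*>(addr);
}
```

This round trip is only valid while the original buffer stays alive and mapped, which is why the real mechanism relies on shared memory across the port boundary.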
In some embodiments of the present application, in the step of transmitting the memory address to the third party application through the client port, the controller is further configured to:
transmitting the memory address to the third party application through the client port based on a transmission port, where the language type of the client port differs from that of the transmission port.
In some embodiments of the present application, in the step of transmitting the memory address to the third party application via the client port based on the transmission port, the controller is further configured to:
calling a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port to transmit the memory address to the third party application.
In a second aspect, an embodiment of the present application provides a data processing method, which is applied to a display device, including:
acquiring, through a service port, array sound data corresponding to the original sound data in the audio hardware layer;
transmitting the array sound data to a client port corresponding to the service port through the service port, wherein the client port is built in the target application;
and transmitting the memory address of the original sound data to a third party application built in the target application through the client port based on the array sound data, wherein the memory address is used for the third party application to acquire the original sound data.
In some embodiments of the present application, acquiring, through the service port, the array sound data corresponding to the original sound data in the audio hardware layer includes:
calling a first interface of a hardware abstraction layer through the service port to acquire the original sound data from the audio hardware layer;
the original sound data is assembled into the array sound data through the service port.
In some embodiments of the present application, transmitting, through the client port, the memory address of the original sound data to the third party application built in the target application based on the array sound data includes:
acquiring, through the client port, a pointer to the first element of the array memory referenced by the array sound data, so as to obtain the memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
In some embodiments of the present application, transmitting the memory address to the third party application through the client port includes:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
In some embodiments of the present application, transmitting the memory address to the third party application through the client port includes:
transmitting the memory address to the third party application through the client port based on a transmission port, where the language type of the client port differs from that of the transmission port.
In some embodiments of the present application, transmitting, by a client port, a memory address to a third party application based on a transmission port includes:
calling a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port to transmit the memory address to the third party application.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantages: the target application in the display device and its built-in third party application can transmit the original sound data across processes based on the memory address of the original sound data, so that the amount of data to be transmitted is small. Even at high transmission frequencies, the display device's CPU resources are not heavily consumed, and system lag on the display device is avoided.
Drawings
In order to more clearly illustrate the embodiments of the present application or the implementations in the related art, the drawings required for describing the embodiments or the related art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present application, and that those of ordinary skill in the art may obtain other drawings from them.
FIG. 1 illustrates an operational scenario between a display device and a control apparatus according to some embodiments;
fig. 2 shows a hardware configuration block diagram of the control device 100 according to some embodiments;
fig. 3 illustrates a hardware configuration block diagram of a display device 200 according to some embodiments;
FIG. 4 illustrates a software configuration diagram in a display device 200 according to some embodiments;
FIG. 5 illustrates a schematic diagram of a process of operation of a controller of a display device according to some embodiments;
FIG. 6 illustrates a schematic diagram of a process of servicing a port of a controller according to some embodiments;
FIG. 7 illustrates a schematic diagram of a process of operation of a client port of a controller, in accordance with some embodiments;
FIG. 8 illustrates a flow diagram of a data processing method according to some embodiments.
Detailed Description
For purposes of clarity and ease of implementation of the present application, exemplary implementations of the present application are described clearly and completely below with reference to the accompanying drawings in which they are illustrated. It is apparent that the described exemplary implementations are only some, but not all, of the examples of the present application.
It should be noted that the brief description of the terms in the present application is only for convenience in understanding the embodiments described below, and is not intended to limit the embodiments of the present application. Unless otherwise indicated, these terms should be construed in their ordinary and customary meaning.
The terms "first," "second," "third," and the like in the description, the claims, and the above drawings are used for distinguishing between similar objects or entities and are not necessarily intended to describe a particular order or sequence, unless otherwise indicated. It is to be understood that the terms so used are interchangeable under appropriate circumstances.
The terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a product or apparatus that comprises a list of elements is not necessarily limited to all elements explicitly listed, but may include other elements not expressly listed or inherent to such product or apparatus.
The display device provided in the embodiment of the application may have various implementation forms, for example, may be a television, an intelligent television, a laser projection device, a display (monitor), an electronic whiteboard (electronic bulletin board), an electronic desktop (electronic table), and the like. Fig. 1 and 2 are specific embodiments of a display device of the present application.
Fig. 1 is a schematic diagram of an operation scenario between a display device and a control apparatus according to an embodiment. As shown in fig. 1, a user may operate the display device 200 through the smart device 300 or the control apparatus 100.
In some embodiments, the control apparatus 100 may be a remote controller. Communication between the remote controller and the display device includes infrared protocol communication, Bluetooth protocol communication, and other short-range communication modes, and the display device 200 is controlled wirelessly or by wire. The user may control the display device 200 by inputting user instructions through keys on the remote control, voice input, control panel input, etc.
In some embodiments, a smart device 300 (e.g., mobile terminal, tablet, computer, notebook, etc.) may also be used to control the display device 200. For example, the display device 200 is controlled using an application running on a smart device.
In some embodiments, the display device may receive instructions without using the smart device or control apparatus described above, and may instead be controlled by the user through touch, gestures, or the like.
In some embodiments, the display device 200 may also be controlled in ways other than through the control apparatus 100 and the smart device 300; for example, the user's voice commands may be received directly through a module for acquiring voice commands configured inside the display device 200, or through a voice control device configured outside the display device 200.
In some embodiments, the display device 200 also performs data communication with a server 400. The display device 200 may be allowed to communicate via a local area network (LAN), a wireless local area network (WLAN), or other networks. The server 400 may provide various contents and interactions to the display device 200. The server 400 may be one cluster or multiple clusters, and may include one or more types of servers.
Fig. 2 exemplarily shows a hardware configuration block diagram of the control apparatus 100 according to an exemplary embodiment. As shown in fig. 2, the control apparatus 100 includes a controller 110, a communication interface 130, a user input/output interface 140, a memory, and a power supply. The control apparatus 100 may receive a user's input operation instruction and convert it into an instruction that the display device 200 can recognize and respond to, serving as an intermediary for interaction between the user and the display device 200.
As shown in fig. 3, the display apparatus 200 includes at least one of a modem 210, a communicator 220, a detector 230, an external device interface 240, a controller 250, a display 260, an audio output interface 270, a memory, a power supply, and a user interface.
In some embodiments, the controller includes a processor, a video processor, an audio processor, a graphics processor, RAM, ROM, and first through nth input/output interfaces.
The display 260 includes a display screen component for presenting pictures and a driving component for driving image display, and is used to receive image signals output from the controller and to display video content, image content, menu manipulation interfaces, and user manipulation UI interfaces.
The display 260 may be a liquid crystal display, an OLED display, a projection device, or a projection screen.
The communicator 220 is a component for communicating with external devices or servers according to various communication protocol types. For example: the communicator may include at least one of a Wifi module, a bluetooth module, a wired ethernet module, or other network communication protocol chip or a near field communication protocol chip, and an infrared receiver. The display device 200 may establish transmission and reception of control signals and data signals with the external control device 100 or the server 400 through the communicator 220.
A user interface, which may be used to receive control signals from the control device 100 (e.g., an infrared remote control, etc.).
The detector 230 is used to collect signals of the external environment or of interaction with the outside. For example, the detector 230 may include a light receiver, a sensor for capturing the intensity of ambient light; or an image collector, such as a camera, which may be used to collect external environment scenes, user attributes, or user interaction gestures; or a sound collector, such as a microphone, for receiving external sounds.
The external device interface 240 may include, but is not limited to, the following: high Definition Multimedia Interface (HDMI), analog or data high definition component input interface (component), composite video input interface (CVBS), USB input interface (USB), RGB port, or the like. The input/output interface may be a composite input/output interface formed by a plurality of interfaces.
The modem 210 receives broadcast television signals through a wired or wireless reception manner, and demodulates audio and video signals, such as EPG data signals, from a plurality of wireless or wired broadcast television signals.
In some embodiments, the controller 250 and the modem 210 may be located in separate devices, i.e., the modem 210 may also be located in an external device to the main device in which the controller 250 is located, such as an external set-top box or the like.
The controller 250 controls the operation of the display device and responds to the user's operations through various software control programs stored on the memory. The controller 250 controls the overall operation of the display apparatus 200. For example: in response to receiving a user command to select a UI object to be displayed on the display 260, the controller 250 may perform an operation related to the object selected by the user command.
In some embodiments, the controller includes at least one of a CPU, a video processor, an audio processor, a graphics processor (Graphics Processing Unit, GPU), RAM (Random Access Memory), ROM (Read-Only Memory), first through nth input/output interfaces, a communication bus (Bus), and the like.
The user may input a user command through a Graphical User Interface (GUI) displayed on the display 260, and the user input interface receives the user input command through the Graphical User Interface (GUI). Alternatively, the user may input the user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
A "user interface" is a media interface for interaction and exchange of information between an application or operating system and a user that enables conversion between an internal form of information and a user-acceptable form. A commonly used presentation form of the user interface is a graphical user interface (Graphic User Interface, GUI), which refers to a user interface related to computer operations that is displayed in a graphical manner. It may be an interface element such as an icon, a window, a control, etc. displayed in a display screen of the electronic device, where the control may include a visual interface element such as an icon, a button, a menu, a tab, a text box, a dialog box, a status bar, a navigation bar, a Widget, etc.
Referring to FIG. 4, in some embodiments, the system is divided into four layers, which are, from top to bottom, an application layer (simply "application layer"), an application framework layer (Application Framework, simply "framework layer"), an Android runtime (Android Runtime) and system library layer (simply "system runtime layer"), and a kernel layer.
In some embodiments, at least one application program is running in the application program layer, and these application programs may be a Window (Window) program of an operating system, a system setting program, a clock program, or the like; or may be an application developed by a third party developer. In particular implementations, the application packages in the application layer are not limited to the above examples.
The framework layer provides an application programming interface (application programming interface, API) and a programming framework for the applications. The application framework layer includes a number of predefined functions. The application framework layer acts as a processing center that decides the actions of the applications in the application layer. Through the API, an application can access system resources and obtain system services during execution.
As shown in fig. 4, the application framework layer in the embodiment of the present application includes a manager (Manager), a Content Provider (Content Provider), and the like, where the manager includes at least one of the following modules: an Activity Manager (Activity Manager) is used to interact with all activities that are running in the system; a Location Manager (Location Manager) is used to provide system services or applications with access to system location services; a Package Manager (Package Manager) is used to retrieve various information about the application packages currently installed on the device; a Notification Manager (Notification Manager) is used to control the display and clearing of notification messages; a Window Manager (Window Manager) is used to manage icons, windows, toolbars, wallpaper, and desktop components on the user interface.
In some embodiments, the activity manager is used to manage the lifecycle of individual applications and the usual navigation and back functions, such as controlling the exit, opening, and backing out of applications. The window manager is used to manage all window programs, for example obtaining the display screen size, determining whether there is a status bar, locking the screen, capturing the screen, and controlling changes to the display window (for example, shrinking the display window, dithering the display, distorting the display, etc.).
In some embodiments, the system runtime layer provides support for the layer above it, the framework layer. When the framework layer is in use, the Android operating system runs the C/C++ libraries contained in the system runtime layer to implement the functions required by the framework layer.
In some embodiments, the kernel layer is a layer between hardware and software. As shown in fig. 4, the kernel layer contains at least one of the following drivers: audio drive, display drive, bluetooth drive, camera drive, WIFI drive, USB drive, HDMI drive, sensor drive (e.g., fingerprint sensor, temperature sensor, pressure sensor, etc.), and power supply drive, etc.
The Android system with this architecture is an operating system based on the Linux system. To facilitate development, inter-process communication (Inter-Process Communication, IPC) mechanisms are packaged in advance, for example via Android components such as Broadcast, Service, and Provider. In the IPC process, frequent data transfer between processes consumes a large amount of CPU resources, and a display device based on the Android system has limited memory space of its own, so developers need to consider optimizing the performance of the display device's CPU.
Specifically, in the process of audio interaction between the display device and the user, if an audio resource needs to be processed, it must be transmitted across processes to the relevant applications. For example, an automatic content recognition (Automatic Content Recognition, ACR) application of the display device needs to acquire the original sound data of the audio currently being played, and then transmit the original sound data to a third party software development kit (Software Development Kit, SDK) built into the ACR application; the third party SDK is responsible for transmitting the processed original sound data to a remote server, so that the remote server can perform content recognition on the audio resource.
In the related art, the above cross-process transmission of audio resources basically transmits the data object itself, that is, the original sound data, so the amount of transmitted data is large. At high transmission frequencies, this consumes the display device's CPU resources and causes system lag on the display device.
In order to solve the above technical problem, a first aspect of embodiments of the present application shows a display device, which may include a display, and a controller connected to the display. The controller may have a client port built in the target application and a service port corresponding to the client port.
The target application, the client port and the service port are all located in an application program layer of the system.
The target application may be any application used in the process of audio interaction between the display device and the user, and is not limited herein.
The client port may be a client with a sound data transmission function built into the target application; the client port shares one process with the target application. The service port may be a server side with a sound data transmission function; the service port uses a process of its own and can communicate across processes with the target application's sound-data-transmission client.
The service port and the client port can jointly achieve the purpose that the target application obtains the original sound data to be processed and transmits the original sound data to a third party application built in the target application.
The third party application may be a third party SDK built in the target application for implementing sound data processing.
Further, the operation process of the controller of the display device may refer to fig. 5. As shown in fig. 5, the controller is configured to:
s502, acquiring array sound data corresponding to the original sound data in the audio hardware layer through the service port.
In some embodiments, in a case where the target application needs to transmit the original sound data to be processed to the third party application built into it, the controller may acquire, through the service port, the array sound data corresponding to the original sound data from the audio hardware layer.
The audio hardware layer is located in the kernel layer, namely the audio driver in the kernel layer. The controller can acquire, through the service port, the array sound data corresponding to the original sound data from the kernel space, and buffer the array sound data in a buffer area of the user space.
Further, the array sound data may be the original sound data stored in the form of an array.
S504, transmitting the array sound data to the client port through the service port.
In some embodiments, after the array sound data is obtained through the service port, the controller may transmit the array sound data to the client port of the target application through the service port, so that the target application obtains the array sound data and can then transmit the original sound data onward to the third party application built into the target application.
S506, transmitting the memory address of the original sound data to a third party application built in the target application through the client port based on the array sound data, wherein the memory address is used for the third party application to acquire the original sound data.
In some embodiments, after the array sound data is obtained through the client port, the controller may, based on the array sound data, transmit the memory address of the original sound data through the client port to the third party application built into the target application, so that the third party application can obtain the original sound data through the memory address, process it, or transmit the original or processed sound data to a remote server.
Specifically, the controller may determine the memory address of the original sound data based on the array sound data through the client port, and transmit the memory address of the original sound data to a third party application built in the target application through the client port.
In the embodiment of the application, the target application in the display device and its built-in third party application can transmit the original sound data across processes based on the memory address of the original sound data, so that the amount of data to be transmitted is small. Even at high transmission frequencies, the display device's CPU resources are not heavily consumed, and system lag on the display device is avoided.
As an embodiment, the target application may be an ACR application of the display device, and the client port may be an HIDL C++ client integrated in the ACR application, where the ACR application and the HIDL C++ client share one process. The service port may be an HIDL server. The third party application may be a C++ SDK. The ACR application can directly transmit the memory address of the original sound data to the C++ SDK inside the ACR application through the HIDL C++ client; compared with transmitting the original sound data directly, and with the overhead of converting data formats between C++ and Java, this can effectively reduce CPU usage.
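The CPU saving claimed here comes from the per-transfer payload size: an 8-byte address replaces a copy of the whole sample buffer. A rough back-of-envelope comparison (the buffer length is an assumed example value, not taken from the patent):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Illustrative comparison of per-transfer payload sizes: sending the full
// 16-bit PCM buffer versus sending only its memory address.
size_t payloadBytesForBuffer(size_t sampleCount) {
    return sampleCount * sizeof(uint16_t);  // full copy of the samples
}

size_t payloadBytesForAddress() {
    return sizeof(int64_t);  // a single 64-bit memory address
}
```

For an assumed 4096-sample buffer, the buffer payload is 8192 bytes against 8 bytes for the address, a thousandfold reduction per transfer.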
In some embodiments, in step S502, the controller may be further configured to:
calling a first interface of a hardware abstraction layer through a service port to acquire original sound data from an audio hardware layer;
the original sound data is assembled into the array sound data through the service port.
In these embodiments, the controller may invoke the first interface of the hardware abstraction layer through the service port to obtain the raw sound data from the audio hardware layer and cache it to the cache area of the user space.
Further, the controller may call, through the service port, an array assembly function corresponding to the array type of the client port, assemble the original sound data into the array sound data, and cache the array sound data in the buffer area of the user space.
As an embodiment, as shown in fig. 6, the hardware abstraction layer may be a HAL, in which a hardware interface, namely the first interface, is provided for acquiring sound data from the audio hardware layer (i.e., the audio driver). The controller may invoke this hardware interface through the HIDL server to copy the original sound data from kernel space to the buffer area of user space.
Alternatively, the original sound data copied by the controller through the hardware interface may be PCM sound data, for example, uint16_t buffer data.
Different array assembly functions may assemble the original sound data into array sound data of different array types. For example, the hidl_vec::setToExternal() function may initialize a hidl_vec&lt;T&gt; to point to an external data buffer of type T; the uint16_t buffer data can thus be assembled via hidl_vec&lt;uint16_t&gt; vec_data_u16{}; vec_data_u16.setToExternal(buffer), whereby the HIDL server may copy the 16-bit hidl_vec&lt;uint16_t&gt; data vector, i.e. the array sound data, to the HIDL C++ client.
In the embodiment of the application, the original sound data held in kernel space needs to be transferred to the application layer through cross-process communication, specifically from the audio hardware layer to the application layer, a path that may involve both copying of the data and conversion between Java and C/C++ data formats. The Android HIDL server and the C++ client can realize the cross-process communication in this transmission process, and assembling the original sound data into array sound data before transmission further reduces the amount of data to be transmitted, thereby reducing the CPU resource consumption of the display device and avoiding system stuttering of the display device.
In some embodiments, in step S506, the controller may be further configured to:
acquiring a pointer of a first element of an array memory pointed by the array sound data through a client port to obtain a memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
In these embodiments, the controller may call the pointer function through the client port, obtain the pointer of the first element of the array memory pointed by the array sound data, obtain the memory address of the original sound data, and then transmit the memory address, i.e. the pointer, to the third party application through the client port.
As one embodiment, the HIDL C++ client integrated in the ACR application may call the array sound data's vector_data.data() method to obtain a pointer to the first element of the array memory pointed to by the returned built-in vector, and use that pointer as the memory address of the uint16_t buffer data; the HIDL C++ client may then transmit the pointer to the C++ SDK, thereby realizing the transfer of the original sound data from the HIDL server to the C++ SDK.
In the embodiment of the application, the transmission of the cross-process sound data can be realized by transmitting the pointer, so that the transmission is not limited by the data quantity, and the CPU resource is used less even under the condition of high transmission frequency.
In some embodiments, in the step of transmitting the memory address to the third party application through the client port, the controller may be further configured to:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
In these embodiments, the controller may convert the data format of the memory address to a first data type required by the third party application through the client port, and then transmit the memory address of the first data type to the third party application through the client port.
As one embodiment, because the C++ SDK requires data of type short, the HIDL C++ client needs to convert the data format of the memory address, obtaining the short* pointer required by the C++ SDK via (short*)vector_data.data(); the HIDL C++ client then transfers this pointer to the C++ SDK.
In the embodiment of the application, the ACR application can directly transfer a pointer of the type required by the C++ SDK to the C++ SDK through the HIDL C++ client; compared with transferring the original sound data directly, and with the overhead of converting between C++ and Java data format types, this can further reduce CPU usage.
In some embodiments, in the step of transmitting the memory address to the third party application through the client port, the controller may be further configured to:
the memory address is transmitted to the third party application through the client port based on a transmission port, wherein the language type of the client port is different from that of the transmission port.
In these embodiments, the controller may transmit the memory address to the third party application via the client port based on the transmission port.
Because data transmission between different applications needs to be realized through the transmission port, the language types of the client port and the third party application are the same, while the language types of the client port and the transmission port are different. The client port therefore needs to convert the memory address into the data format of the transmission port's language type and transmit the converted memory address to the transmission port; the transmission port then restores the received memory address to the data format of the client port's language type and transmits it to the third party application, so as to realize the transfer of the memory address.
As an embodiment, the transmission port is a JAVA layer, the pointer acquired by the HIDL c++ client is in a c++ data format, the pointer needs to be converted into the JAVA data format and transmitted to the JAVA layer, and the JAVA layer needs to restore the pointer into the c++ data format and transmit the pointer to the c++ SDK.
In some embodiments, in the step of transmitting the memory address to the third party application via the client port based on the transmission port, the controller may be further configured to:
calling a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port to transmit the memory address to the third party application.
In these embodiments, the controller may call the second interface of the transmission port through the client port to convert the memory address into the data format of the transmission port's language type and transmit the converted memory address to the transmission port, and call the third interface of the third party application through the transmission port to restore the received memory address to the data format of the client port's language type and transmit it to the third party application, so as to implement the data format conversion and transfer of the memory address.
As an embodiment, the second interface and the third interface may each be a JNI layer. As shown in fig. 7, the Android JAVA layer directly calls an audio API of the HIDL C++ client by way of JNI and registers a listener with the HIDL C++ client, so that after acquiring the hidl_vec&lt;uint16_t&gt; data vector array sound data, the HIDL C++ client can call a method in the JAVA layer through JNI to transfer the pointer short_ptr of the array sound data to the JAVA layer. Specifically, the pointer is saved into a jlong by calling reinterpret_cast&lt;jlong&gt;(ptr) in the JNI layer, and the jlong pointer is then passed as a parameter to the JAVA layer through a JNI method callback. For example, the JAVA class to be called is obtained through the jclass cbClass = env-&gt;GetObjectClass() function, the method to be called in that JAVA class is obtained through the jmethodID cbMethod = env-&gt;GetMethodID() function, and the env-&gt;CallVoidMethod() function is then called to pass the pointer to the JAVA layer; since JNI's jlong corresponds to JAVA's long type, the JAVA layer thereby receives the pointer as a long ptr.
Further, the JAVA layer may call the API of the C++ SDK through the JNI layer to transfer the pointer to the C++ SDK. Specifically, reinterpret_cast&lt;jshort*&gt;(ptr) is called at the JNI layer to convert the pointer from the jlong type back to a jshort* type, obtaining the pointer jshort_buffer; the API port_audio(short_buffer) of the C++ SDK is then called to transfer the pointer to the C++ SDK, so that the C++ SDK receives the short* pointer.
In the embodiment of the application, the ACR application can directly transfer pointers of the type required by the C++ SDK to the C++ SDK through the HIDL C++ client, thereby reducing the amount of data format type conversion required when communicating across processes through the JAVA layer, which can further reduce CPU usage.
A second aspect of the embodiments of the present application shows a data processing method, which may be applied to a display device, and specifically may refer to fig. 8, where the data processing method includes the following steps:
S802, acquiring an array of sound data corresponding to the original sound data in the audio hardware layer through the service port.
S804, transmitting the array sound data to a client port corresponding to the service port through the service port, wherein the client port is built in the target application.
S806, based on the array sound data through the client port, the memory address of the original sound data is transmitted to a third party application built in the target application, and the memory address is used for the third party application to acquire the original sound data.
In the embodiment of the application, the target application in the display device and its built-in third party application can transfer the original sound data across processes based on the memory address of the original sound data, so that the amount of data that needs to be transmitted is small. Even at a high transmission frequency, little CPU resource of the display device is consumed, which avoids system stuttering of the display device.
In some embodiments, S802 may specifically include:
calling a first interface of a hardware abstraction layer through a service port to acquire original sound data from an audio hardware layer;
the original sound data is assembled into the array sound data through the service port.
In the embodiment of the application, the service port and the client port can realize cross-process communication in the transmission process, and assembling the original sound data into the array sound data before transmission further reduces the amount of data to be transmitted, thereby reducing the CPU resource consumption of the display device and avoiding system stuttering of the display device.
In some embodiments, S806 may specifically include:
acquiring a pointer of a first element of an array memory pointed by the array sound data through a client port to obtain a memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
In the embodiment of the application, the transmission of the cross-process sound data can be realized by transmitting the pointer, so that the transmission is not limited by the data quantity, and the CPU resource is used less even under the condition of high transmission frequency.
In some embodiments, transmitting the memory address to the third party application through the client port may specifically include:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
In the embodiment of the application, the client port can directly transmit a pointer of the type required by the third party application to the third party application; compared with transmitting the original sound data directly, and with the overhead of data format type conversion, this can further reduce CPU usage.
In some embodiments, transmitting the memory address to the third party application through the client port may specifically include:
the memory address is transmitted to the third party application via the client port based on the transmission port, the language type of the client port being different from that of the transmission port.
Further, transmitting, by the client port, the memory address to the third party application based on the transmission port may specifically include:
calling a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port to transmit the memory address to the third party application.
In the embodiment of the application, the client port can directly transmit the pointer of the type required by the third party application to the third party application, so that the data volume of data format type conversion required by the process of cross-process communication through the transmission port is reduced, and the CPU use can be further reduced.
The embodiment of the invention also provides a computer readable storage medium, on which a computer program is stored; when executed by a processor, the computer program implements each process executed by the data processing method above and can achieve the same technical effects. To avoid repetition, the details are not described again here.
The computer readable storage medium may be a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk, an optical disk, or the like.
The present invention further provides a computer program product which, when run on a computer, causes the computer to implement the data processing method described above.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the corresponding technical solutions from the scope of the technical solutions of the embodiments of the present application.
The foregoing description, for purposes of explanation, has been presented in conjunction with specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the embodiments to the precise forms disclosed above. Many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles and the practical application, to thereby enable others skilled in the art to best utilize the embodiments and various embodiments with various modifications as are suited to the particular use contemplated.

Claims (12)

1. A display device, characterized by comprising:
a display;
the controller is connected with the display and is provided with a client port built in a target application and a service port corresponding to the client port;
the controller is configured to:
acquiring an array of sound data corresponding to the original sound data in the audio hardware layer through the service port;
transmitting the array sound data to the client port through the service port;
and transmitting the memory address of the original sound data to a third party application built in the target application through the client port based on the array sound data, wherein the memory address is used for the third party application to acquire the original sound data.
2. The display device of claim 1, wherein in the step of acquiring, through the service port, an array of sound data corresponding to original sound data within an audio hardware layer, the controller is further configured to:
invoking a first interface of a hardware abstraction layer through the service port to obtain the original sound data from the audio hardware layer;
and assembling the original sound data into the array sound data through the service port.
3. The display device of claim 1, wherein in the step of transmitting the memory address of the original sound data to a third party application built in the target application based on the array sound data through the client port, the controller is further configured to:
acquiring a pointer of a first element of an array memory pointed by the array sound data through the client port, and obtaining the memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
4. The display device of claim 3, wherein in the step of transmitting the memory address to the third party application through the client port, the controller is further configured to:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
5. The display device of claim 3, wherein in the step of transmitting the memory address to the third party application through the client port, the controller is further configured to:
and transmitting the memory address to the third party application through the client port based on a transmission port, wherein the language type of the client port is different from that of the transmission port.
6. The display device of claim 3, wherein in the step of transmitting the memory address to the third party application via the client port based on a transmission port, the controller is further configured to:
invoking a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port so as to transmit the memory address to the third party application.
7. A data processing method, applied to a display device, comprising:
acquiring an array of sound data corresponding to the original sound data in the audio hardware layer through a service port;
transmitting the array sound data to a client port corresponding to the service port through the service port, wherein the client port is built in a target application;
and transmitting the memory address of the original sound data to a third party application built in the target application through the client port based on the array sound data, wherein the memory address is used for the third party application to acquire the original sound data.
8. The method of claim 7, wherein the obtaining, through the service port, the array of sound data corresponding to the original sound data in the audio hardware layer, comprises:
invoking a first interface of a hardware abstraction layer through the service port to obtain the original sound data from the audio hardware layer;
and assembling the original sound data into the array sound data through the service port.
9. The method of claim 7, wherein the transmitting, via the client port, the memory address of the original sound data to the third party application built in the target application based on the array of sound data, comprises:
acquiring a pointer of a first element of an array memory pointed by the array sound data through the client port, and obtaining the memory address of the original sound data;
and transmitting the memory address to the third party application through the client port.
10. The method of claim 9, wherein the transmitting the memory address to the third party application via the client port comprises:
converting the data format of the memory address into a first data type through the client port;
and transmitting the memory address of the first data type to the third party application through the client port.
11. The method of claim 9, wherein the transmitting the memory address to the third party application via the client port comprises:
and transmitting the memory address to the third party application through the client port based on a transmission port, wherein the language type of the client port is different from that of the transmission port.
12. The method of claim 9, wherein the transmitting the memory address to the third party application via the client port based on a transmission port comprises:
invoking a second interface of the transmission port through the client port to transmit the memory address to the transmission port;
and calling a third interface of the third party application through the transmission port so as to transmit the memory address to the third party application.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211737596.4A CN117806586A (en) 2022-12-30 2022-12-30 Display device and data processing method


Publications (1)

Publication Number Publication Date
CN117806586A true CN117806586A (en) 2024-04-02

Family

ID=90422285



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination