CN114222073B - Video output method, video output device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN114222073B
CN114222073B (application CN202111514569.6A)
Authority
CN
China
Prior art keywords
video
output
interface
video output
information list
Prior art date
Legal status
Active
Application number
CN202111514569.6A
Other languages
Chinese (zh)
Other versions
CN114222073A (en)
Inventor
戴宁
姜俊
魏力新
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111514569.6A
Publication of CN114222073A
Application granted
Publication of CN114222073B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/765: Interface circuits between an apparatus for recording and another apparatus
    • H04N7/0117: Conversion of standards involving conversion of the spatial resolution of the incoming video signal
    • H04N7/0127: Conversion of standards by changing the field or frame frequency of the incoming video signal, e.g. frame rate converter

Abstract

The present disclosure provides a video output method, an apparatus, an electronic device and a storage medium, relating to the field of computer technology, and in particular to artificial intelligence technologies such as voice and video processing. The specific implementation scheme is as follows: a target video is acquired; the target video is processed in a hardware-accelerated manner according to a pre-acquired output information list to generate at least one output video, wherein the output information list comprises an identifier of at least one video output interface and the parameters required of the corresponding video; and the at least one output video is distributed to the corresponding video output interface.

Description

Video output method, video output device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technology, in particular to Artificial Intelligence (AI) technologies such as voice and video processing, and more specifically to a video output method and apparatus, an electronic device, and a storage medium.
Background
With the ever-deeper application of artificial intelligence in video transmission fields such as video conferencing, video data acquired by a camera often needs to be fed as input to a plurality of different modules, such as a face detection module, a communication module, and a User Interface (UI) display module. In general, the parameters required by different modules, such as resolution and frame rate, may differ.
In the prior art, video data is typically consumed by each module serially, or shared across different modules by copying the data. Both approaches increase latency or memory usage, adding load to the system and degrading performance.
Disclosure of Invention
A video output method, apparatus, electronic device, and storage medium are provided.
According to a first aspect, there is provided a video output method, comprising: acquiring a target video; processing the target video in a hardware-accelerated manner according to a pre-acquired output information list to generate at least one output video, wherein the output information list comprises an identifier of at least one video output interface and the parameters required of the corresponding video; and distributing the at least one output video to the corresponding video output interface.
According to a second aspect, there is provided a video output apparatus comprising: an acquisition unit configured to acquire a target video; a processing unit configured to process the target video in a hardware-accelerated manner according to a pre-acquired output information list to generate at least one output video, wherein the output information list comprises an identifier of at least one video output interface and the parameters required of the corresponding video; and a distribution unit configured to distribute the at least one output video to the corresponding video output interface.
According to a third aspect, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described in any implementation manner of the first aspect.
According to a fourth aspect, there is provided a non-transitory computer-readable storage medium having stored thereon computer instructions for enabling a computer to perform the method as described in any implementation of the first aspect.
According to the disclosed technique, the acquired target video is processed in a hardware-accelerated manner to generate at least one output video matching the video parameters required in the pre-acquired output information list, and the generated output videos are then distributed to the corresponding video output interfaces. Handling multi-channel video output in a hardware-accelerated manner thus effectively reduces the consumption of resources such as the CPU and memory, lowering the system load.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an application scenario in which a video output method of an embodiment of the present disclosure may be implemented;
FIG. 4 is a schematic diagram of a video output device according to an embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing a video output method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of embodiments of the present disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram 100 illustrating a first embodiment according to the present disclosure. The video output method comprises the following steps:
s101, acquiring a target video.
In the present embodiment, the execution subject of the video output method can acquire the target video in various ways. The target video may be, for example, a video captured in real time by a connected camera.
And S102, performing corresponding processing on the target video in a hardware acceleration mode according to the pre-acquired output information list to generate at least one path of output video.
In this embodiment, according to the pre-acquired output information list, the execution subject may process the target video acquired in step S101 using various hardware acceleration methods to generate at least one output video. The output information list may include an identifier of at least one video output interface and the parameters required of the corresponding video. The required parameters may describe the video that the video output interface can transmit, such as its resolution and frame rate. As an example, the output information list may include the identifier of video output interface A together with the parameters required of its video, and the identifier of video output interface B together with the parameters required of its video.
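As an illustration only (the disclosure does not fix a concrete data layout), such an output information list can be sketched in Python; all field names, interface identifiers, and parameter values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class OutputInfo:
    """One entry of the output information list: an interface identifier
    plus the parameters the video sent to that interface must satisfy."""
    interface_id: str
    width: int
    height: int
    fps: int

# Hypothetical list with two interfaces, A and B, as in the example above.
output_info_list = [
    OutputInfo("A", width=1920, height=1080, fps=30),
    OutputInfo("B", width=640, height=360, fps=15),
]

def required_params(info_list, interface_id):
    """Look up the required video parameters for a given interface."""
    for entry in info_list:
        if entry.interface_id == interface_id:
            return entry
    return None
```

The processing step then only needs to iterate over this list and produce one output video per entry.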
In this embodiment, as an example, the execution subject may process the target video using the GPU (Graphics Processing Unit) of a graphics card, so that the processed target video conforms to the required video parameters in the output information list.
S103, at least one path of output video is distributed to the corresponding video output interface.
In this embodiment, the execution subject may distribute the at least one output video generated in step S102 to the corresponding video output interfaces in various ways. As an example, the execution subject may send the output video meeting the required parameters of video output interface A to interface A, and the output video meeting the required parameters of video output interface B to interface B.
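The distribution step amounts to routing each generated output video to the interface whose required parameters it was produced for. A minimal sketch, with a caller-supplied `send` callable standing in for the actual transfer mechanism (which the disclosure leaves open):

```python
def distribute(processed_videos, interface_ids, send):
    """Send each processed video to its matching interface.

    processed_videos: dict mapping interface id -> output video
    interface_ids: identifiers from the output information list
    send: callable (interface_id, video) performing the transfer
    """
    for iface in interface_ids:
        video = processed_videos.get(iface)
        if video is not None:       # skip interfaces with no output yet
            send(iface, video)

# Toy usage: record what would have been sent.
sent = []
distribute({"A": "video_a", "B": "video_b"}, ["A", "B", "C"],
           lambda iface, v: sent.append((iface, v)))
```

Interface "C" has no generated output in this toy run, so nothing is sent to it.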
In the method provided by the above embodiment of the present disclosure, the acquired target video is processed in a hardware-accelerated manner to generate at least one output video matching the video parameters required in the pre-acquired output information list, and the generated output videos are then distributed to the corresponding video output interfaces. Handling multi-channel video output in a hardware-accelerated manner thus effectively reduces the consumption of resources such as the CPU and memory, lowering the system load.
In some optional implementations of this embodiment, the execution subject may process the target video through an OpenGL ES (Open Graphics Library for Embedded Systems) interface to generate at least one output video matching the parameters required of each video.
In these implementations, the execution subject applies various processing to the acquired target video through a pre-created OpenGL ES interface, thereby generating at least one output video matching the parameters required of each video.
Based on this optional implementation, the scheme provides OpenGL-based multi-channel video output, so that the GPU can be controlled directly through OpenGL, reducing the workload of secondary development and improving the efficiency of video output.
In some optional implementations of this embodiment, building on the above, in response to determining that the identifier of the OpenGL ES interface exists in the output list of a communicatively connected camera, the execution subject may acquire the target video from the camera.
In these implementations, the execution subject may first determine whether the identifier of the OpenGL ES interface is present in the output list of the camera. Generally, the execution subject may create an OpenGL ES surface in advance and, after the camera is turned on, register that surface in the camera's output list so as to receive the data captured by the camera. Accordingly, in response to determining that the identifier of the OpenGL ES interface exists in the camera's output list, the execution subject may acquire the target video from the camera.
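The check-then-acquire logic of this implementation can be sketched as follows; the `Camera` class and the `GLES_SURFACE_ID` identifier are toy stand-ins for the real camera API and the registered OpenGL ES surface, not part of the disclosure:

```python
class Camera:
    """Toy stand-in for a camera with an output list of registered surfaces."""
    def __init__(self):
        self.output_list = set()
        self._frames = ["frame0", "frame1"]

    def register_surface(self, surface_id):
        """Register a surface so it receives captured data."""
        self.output_list.add(surface_id)

    def read(self):
        return list(self._frames)

GLES_SURFACE_ID = "opengl_es_surface"  # hypothetical identifier

def acquire_target_video(camera):
    """Acquire video only if the OpenGL ES surface is in the output list."""
    if GLES_SURFACE_ID in camera.output_list:
        return camera.read()
    return None
```

Until the surface is registered, `acquire_target_video` yields nothing, mirroring the selective transmission described above.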
Based on this optional implementation, video data can be selectively transmitted to the OpenGL ES surface according to the camera's output list, improving the flexibility of video transmission.
Optionally, building on the above, the execution subject may bridge the target video captured by the camera to the OpenGL ES interface.
In this implementation, when running on an Android system, the execution subject may send the target video captured by the camera to the OpenGL ES interface using Android's Surface interface and bridging technique.
Based on this optional implementation, the Android Surface interface and the OpenGL ES interface can be combined to further improve the hardware acceleration effect.
In some optional implementations of this embodiment, building on the above, the output information list may be generated through the following steps:
In the first step, in response to receiving registration information sent by a video output interface, the output information list is updated according to the registration information.
In these implementations, in response to receiving the registration information sent by the video output interface, the execution subject may update the output information list according to the registration information. The registration information may include the identifier of the video output interface and the parameters required of the corresponding video.
In the second step, in response to receiving logout information sent by a video output interface, the output information list is updated according to the logout information.
In these implementations, in response to receiving the logout information sent by the video output interface, the execution subject may update the output information list according to the logout information. The logout information may include the identifier of the video output interface.
Based on this optional implementation, the output video is processed according to the contents of the output information list after the target video is acquired; in effect, a relay stage exists between acquisition and distribution, so that video is distributed according to the list and interfaces can be registered or deregistered in real time at runtime. Compared with the prior art, in which the camera must be restarted every time a video output interface is attached or detached, this reduces stalling, improves the fluency of video transmission, and optimizes the user experience.
With continued reference to fig. 2, fig. 2 is a schematic diagram 200 according to a second embodiment of the present disclosure. The video output method comprises the following steps:
s201, acquiring a target video.
S2021: in response to receiving the cropping coordinates sent by the face detection interface, the cropping coordinates are sent to the focused portrait interface.
In this embodiment, in response to receiving the cropping coordinates sent by the face detection interface, the execution subject of the video output method may send the cropping coordinates to the focused portrait interface in various ways. The cropping coordinates may be generated based on whether the position of the detected face satisfies a preset cropping condition.
As an example, the face detection interface may connect to a module implementing a face detection function, from which the position coordinates of a face can be acquired. That module may determine, from the face's position coordinates, whether the indicated position satisfies a preset cropping condition. As examples, the cropping condition may be that the positional deviation exceeds a first preset threshold (for instance, the face is not between the two vertical thirds of the screen), or that the number of pixels covered by the face is below a second preset threshold. In response to determining that the preset cropping condition is satisfied, the face detection module may generate cropping coordinates to achieve a goal matching that condition (such as centering or zooming in on the face). The face detection interface may obtain the cropping coordinates from the module, and in response to receiving them, the execution subject may forward them to the focused portrait interface. The focused portrait interface may connect to a module implementing a face focusing function, such as transmitting face close-ups.
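The two example cropping conditions named above (face outside the middle third of the screen, or face covering fewer pixels than a threshold) can be sketched as a predicate; the signature and the threshold value are illustrative assumptions, not values from the disclosure:

```python
def needs_crop(face_x, frame_width, face_pixels, min_face_pixels=10000):
    """Return True when either hypothetical cropping condition holds:
    (1) the face centre lies outside the middle third of the frame, or
    (2) the face covers fewer pixels than a threshold (warrants zoom-in)."""
    left_third, right_third = frame_width / 3, 2 * frame_width / 3
    off_centre = not (left_third <= face_x <= right_third)
    too_small = face_pixels < min_face_pixels
    return off_centre or too_small
```

When the predicate is true, the face detection module would emit cropping coordinates that re-center or enlarge the face.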
S2022: the output information list is updated according to the cropping coordinates and the identifier of the focused portrait interface.
In this embodiment, the execution subject may update the output information list in various ways according to the cropping coordinates received in step S2021 and the identifier of the focused portrait interface. As an example, the execution subject may store the received cropping coordinates in association with the identifier of the focused portrait interface in the output information list, indicating that the video for the interface identified by that identifier is to be cropped from the video image according to the cropping coordinates.
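Associating the cropping coordinates with the focused portrait interface's entry might look like the following sketch (the entry layout and coordinate format `(x, y, w, h)` are hypothetical):

```python
def update_with_crop(info_list, focus_iface_id, crop_coords):
    """Store cropping coordinates against the focused-portrait interface
    id, so the processing step knows to crop the video for it."""
    for entry in info_list:
        if entry["interface_id"] == focus_iface_id:
            entry["crop"] = crop_coords        # update existing entry
            return info_list
    # No entry yet for this interface: create one with the crop region.
    info_list.append({"interface_id": focus_iface_id, "crop": crop_coords})
    return info_list
```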
And S203, performing corresponding processing on the target video by adopting a hardware acceleration mode according to the pre-acquired output information list to generate at least one path of output video.
In this embodiment, the output information list may include an identifier of at least one video output interface and the parameters required of the corresponding video. The required parameters may include cropping coordinates, which indicate the location of the image crop. The video output interfaces may include a face detection interface and a focused portrait interface.
In this embodiment, as an example, the execution subject may, using various hardware acceleration methods, crop the target video at the position indicated by the cropping coordinates to generate the output video sent to the focused portrait interface.
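Setting the hardware acceleration aside (the disclosure performs this step on the GPU), the cropping transform itself can be illustrated on a toy frame represented as a 2-D list of pixels:

```python
def crop_frame(frame, crop):
    """frame: 2-D list of pixel rows; crop: (x, y, w, h) in pixels."""
    x, y, w, h = crop
    return [row[x:x + w] for row in frame[y:y + h]]

# Toy 8x6 frame where each pixel records its (row, col) position.
frame = [[(r, c) for c in range(8)] for r in range(6)]
face_closeup = crop_frame(frame, (2, 1, 4, 3))  # 4x3 region at (2, 1)
```

A GPU implementation would express the same region selection as texture coordinates rather than list slicing.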
S204, at least one path of output video is distributed to the corresponding video output interface.
S201, S203, and S204 correspond to S101, S102, and S103 and their optional implementations in the foregoing embodiment, respectively; the descriptions above apply equally here and are not repeated.
As can be seen from fig. 2, the flow 200 of the video output method in this embodiment adds the steps of updating the output information list according to the cropping coordinates (generated from the position of the detected face) and the identifier of the focused portrait interface, and distributing the output videos processed according to that list. The scheme described in this embodiment can therefore crop video containing face images to generate multi-channel video satisfying various requirements (such as centering or close-ups of a person), without occupying excessive CPU, memory, or other resources, thereby reducing the performance requirements on the device.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of a video output method according to an embodiment of the present disclosure. In the application scenario of fig. 3, a user 301 uses a terminal device 302 to conduct a video call. The terminal device 302 acquires the captured video 303 from the mounted camera 3021, then processes the video 303 in a hardware-accelerated manner according to the pre-acquired output information list 304 to generate output videos 3051, 3052, 3053, and so on. Here, output video 3051 has undergone resolution reduction, output video 3052 frame-rate reduction, and output video 3053 portrait-centering cropping. The terminal device 302 may then send the output videos 3051, 3052, and 3053 to the corresponding video output interfaces (e.g., interface A, interface B, and interface N), so that those interfaces can implement functions such as face detection, UI display, and video transmission.
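As one concrete illustration of the per-interface processing in this scenario, the frame-rate reduction applied to output video 3052 can be sketched by frame dropping; this is an illustrative simplification that assumes the target rate evenly divides the source rate:

```python
def reduce_frame_rate(frames, src_fps, dst_fps):
    """Keep every (src_fps // dst_fps)-th frame, a simple way to
    produce a lower-rate output stream such as video 3052 above.
    Assumes dst_fps evenly divides src_fps."""
    step = src_fps // dst_fps
    return frames[::step]

frames = list(range(30))                      # one second at 30 fps
low_rate = reduce_frame_rate(frames, 30, 15)  # half the frames remain
```

Resolution reduction and portrait-centering cropping would be analogous per-entry transforms driven by the output information list.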
At present, the prior art typically uses video data serially in individual modules, or in different modules by copying the data; both approaches increase latency or memory usage, adding load to the system and degrading performance. In the method provided by the above embodiment of the present disclosure, the acquired target video is processed in a hardware-accelerated manner to generate at least one output video matching the video parameters required in the pre-acquired output information list, and the generated output videos are then distributed to the corresponding video output interfaces. Handling multi-channel video output in a hardware-accelerated manner thus effectively reduces the consumption of resources such as the CPU and memory, lowering the system load.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of a video output apparatus, which corresponds to the method embodiment shown in fig. 1 or fig. 2, and which is particularly applicable to various electronic devices.
As shown in fig. 4, the video output apparatus 400 provided by the present embodiment includes an acquisition unit 401, a processing unit 402, and a distribution unit 403. The acquisition unit 401 is configured to acquire a target video; the processing unit 402 is configured to process the target video in a hardware-accelerated manner according to a pre-acquired output information list to generate at least one output video, where the output information list may include an identifier of at least one video output interface and the parameters required of the corresponding video; the distribution unit 403 is configured to distribute the at least one output video to the corresponding video output interface.
In the video output apparatus 400 of the present embodiment, the specific processing of the acquisition unit 401, the processing unit 402, and the distribution unit 403, and the technical effects it brings, may refer to the descriptions of steps S101, S102, and S103 in the embodiment corresponding to fig. 1, and are not repeated here.
In some optional implementations of this embodiment, the processing unit 402 may be further configured to process the target video through an OpenGL ES interface to generate at least one output video matching the parameters required of each video.
In some optional implementations of this embodiment, the acquisition unit 401 may be further configured to acquire the target video from a communicatively connected camera in response to determining that the identifier of the OpenGL ES interface is present in the camera's output list.
In some optional implementations of this embodiment, the acquisition unit 401 may be further configured to bridge the target video captured by the camera to the OpenGL ES interface.
In some optional implementations of this embodiment, the required video parameters may include cropping coordinates, which indicate the location of the image crop, and the video output interfaces may include a face detection interface and a focused portrait interface. The video output apparatus 400 further comprises an update unit (not shown in the figure) configured to: in response to receiving cropping coordinates sent by the face detection interface, send the cropping coordinates to the focused portrait interface, where the cropping coordinates may be generated based on whether the position of the detected face satisfies a preset cropping condition; and update the output information list according to the cropping coordinates and the identifier of the focused portrait interface.
In some optional implementations of this embodiment, the update unit may be further configured to: in response to receiving registration information sent by a video output interface, update the output information list according to the registration information, where the registration information may include the identifier of the video output interface and the parameters required of the corresponding video; and in response to receiving logout information sent by a video output interface, update the output information list according to the logout information, where the logout information may include the identifier of the video output interface.
In the apparatus provided by the above embodiment of the present disclosure, the processing unit processes the target video acquired by the acquisition unit in a hardware-accelerated manner to generate at least one output video matching the video parameters required in the pre-acquired output information list, and the distribution unit then distributes the generated output videos to the corresponding video output interfaces. Handling multi-channel video output in a hardware-accelerated manner thus effectively reduces the consumption of resources such as the CPU and memory, lowering the system load.
In the technical solution of the present disclosure, the collection, storage, use, processing, transmission, provision, and disclosure of any personal information involved all comply with the relevant laws and regulations and do not violate public order and good morals.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 5 illustrates a schematic block diagram of an example electronic device 500 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 comprises a computing unit 501 which may perform various appropriate actions and processes in accordance with a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be any of a variety of general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 501 performs the various methods and processes described above, such as the video output method. For example, in some embodiments, the video output method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the video output method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the video output method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combining a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.

Claims (12)

1. A video output method, applied to a terminal device, the method comprising:
acquiring a target video by using a camera of the terminal device, and sending the target video to a pre-created OpenGL ES interface;
performing corresponding processing on the target video in a hardware acceleration mode according to a pre-acquired output information list to generate at least two output videos, wherein the output information list comprises identifications of at least two video output interfaces and corresponding video required parameters, the video required parameters describe the parameters of the videos transmitted by the video output interfaces, the at least two video output interfaces comprise video output interfaces respectively corresponding to different video required parameters, the at least two output videos match the video required parameters, and the at least two video output interfaces are used for realizing different functions respectively; and
distributing the at least two output videos to the corresponding video output interfaces;
wherein the performing corresponding processing on the target video in a hardware acceleration mode according to the pre-acquired output information list to generate at least two output videos comprises:
processing the target video through the OpenGL ES interface to generate at least two output videos matching the video required parameters.
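To make the flow of claim 1 concrete, the sketch below models the output information list and the one-pass, multi-output dispatch in plain Python. Every name here (`VideoDispatcher`, `VideoRequiredParams`, the interface identifiers) is a hypothetical illustration, and an ordinary dictionary transform stands in for the hardware-accelerated OpenGL ES processing; the claim specifies no concrete API.

```python
from dataclasses import dataclass

@dataclass
class VideoRequiredParams:
    """Hypothetical 'video required parameters' for one output interface."""
    width: int
    height: int
    pixel_format: str

class VideoDispatcher:
    """Sketch of the claimed flow: one captured frame is processed once per
    entry in the output information list, then routed to the matching
    video output interface."""

    def __init__(self):
        # Output information list: interface identification -> required parameters.
        self.output_info = {}
        # Registered sinks: interface identification -> callback receiving frames.
        self.sinks = {}

    def register(self, interface_id, params, sink):
        self.output_info[interface_id] = params
        self.sinks[interface_id] = sink

    def process(self, frame):
        """Stand-in for the OpenGL ES pass: produce one output video per
        interface, shaped by that interface's required parameters."""
        outputs = {}
        for interface_id, p in self.output_info.items():
            outputs[interface_id] = {
                "data": frame,
                "width": p.width,
                "height": p.height,
                "format": p.pixel_format,
            }
        return outputs

    def distribute(self, outputs):
        """Distribute each output video to its corresponding interface."""
        for interface_id, video in outputs.items():
            self.sinks[interface_id](video)
```

Under this sketch, a frame captured once from the camera yields, say, a low-resolution NV21 stream for face detection and a full-resolution RGBA stream for preview, each delivered to its own interface in a single pass.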
2. The method of claim 1, wherein the obtaining a target video comprises:
in response to determining that an identification of the OpenGL ES interface exists in an output list of a communicatively connected camera, acquiring the target video from the camera.
3. The method of claim 2, wherein the obtaining the target video comprises:
bridging the target video shot by the camera to the OpenGL ES interface.
4. The method according to one of claims 1 to 3, wherein the video required parameters comprise cropping coordinates indicating a position of image cropping, and the video output interface comprises a face detection interface and a focused portrait interface; and
the output information list is generated by the following steps:
sending the cropping coordinates to the focused portrait interface in response to receiving the cropping coordinates sent by the face detection interface, wherein the cropping coordinates are generated based on whether the position of a detected face meets a preset cropping condition; and
updating the output information list according to the cropping coordinates and the identification of the focused portrait interface.
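As an illustration of the cropping-coordinate generation recited in claim 4, the helper below applies a hypothetical preset cropping condition: coordinates are produced only when the detected face's centre leaves a central band of the frame. The function name, the margin, and the half-frame crop size are all assumptions made for the sketch, since the claim does not fix them.

```python
def crop_coords_if_needed(face_box, frame_w, frame_h, margin=0.2):
    """Return cropping coordinates (left, top, width, height) centred on the
    detected face if the preset cropping condition is met, else None.

    face_box: (x, y, w, h) of the detected face in pixels (assumed format).
    """
    x, y, w, h = face_box
    cx, cy = x + w / 2, y + h / 2
    # Preset cropping condition (assumed): crop only when the face centre
    # drifts out of the central band of the frame.
    if (abs(cx - frame_w / 2) <= frame_w * margin
            and abs(cy - frame_h / 2) <= frame_h * margin):
        return None
    # Crop a half-size window centred on the face, clamped to the frame.
    crop_w, crop_h = frame_w // 2, frame_h // 2
    left = min(max(int(cx - crop_w / 2), 0), frame_w - crop_w)
    top = min(max(int(cy - crop_h / 2), 0), frame_h - crop_h)
    return (left, top, crop_w, crop_h)
```

In the claimed flow, such coordinates would be sent to the focused portrait interface and recorded in the output information list against that interface's identification.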
5. The method according to one of claims 1 to 3, wherein the output information list is generated by:
in response to receiving registration information sent by a video output interface, updating the output information list according to the registration information, wherein the registration information comprises an identification of the video output interface and the corresponding video required parameters; and
in response to receiving deregistration information sent by the video output interface, updating the output information list according to the deregistration information, wherein the deregistration information comprises the identification of the video output interface.
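The registration/deregistration maintenance of the output information list in claim 5 amounts to adding and removing entries keyed by interface identification. A minimal sketch, with dictionary shapes and field names that are assumptions rather than anything the claim prescribes:

```python
def handle_registration(output_info, registration):
    """Registration information carries the interface identification and its
    video required parameters; add or overwrite the list entry."""
    output_info[registration["interface_id"]] = registration["params"]
    return output_info

def handle_deregistration(output_info, deregistration):
    """Deregistration information carries only the interface identification;
    drop the entry if present."""
    output_info.pop(deregistration["interface_id"], None)
    return output_info
```

Because the processing step iterates over this list, registering or deregistering an interface directly changes how many output videos are generated on the next frame, without reconfiguring the camera.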
6. A video output apparatus, applied to a terminal device, comprising:
an acquisition unit configured to acquire a target video by using a camera of the terminal device and to transmit the target video to a pre-created OpenGL ES interface;
a processing unit configured to perform corresponding processing on the target video in a hardware acceleration mode according to a pre-acquired output information list to generate at least two output videos, wherein the output information list comprises identifications of at least two video output interfaces and corresponding video required parameters, the video required parameters describe the parameters of the videos transmitted by the video output interfaces, the at least two video output interfaces comprise video output interfaces respectively corresponding to different video required parameters, the at least two output videos match the video required parameters, and the at least two video output interfaces are used for realizing different functions respectively;
a distribution unit configured to distribute the at least two output videos to respective video output interfaces;
wherein the processing unit is further configured to process the target video through the OpenGL ES interface to generate at least two output videos matching the video required parameters.
7. The apparatus of claim 6, wherein the obtaining unit is further configured to:
in response to determining that an identification of the OpenGL ES interface exists in an output list of a communicatively connected camera, obtain the target video from the camera.
8. The apparatus of claim 7, wherein the obtaining unit is further configured to:
bridge the target video shot by the camera to the OpenGL ES interface.
9. The apparatus according to one of claims 6 to 8, wherein the video required parameters comprise cropping coordinates indicating a position of image cropping, and the video output interface comprises a face detection interface and a focused portrait interface; and
the apparatus further comprises an update unit configured to:
send the cropping coordinates to the focused portrait interface in response to receiving the cropping coordinates sent by the face detection interface, wherein the cropping coordinates are generated based on whether the position of a detected face meets a preset cropping condition; and
update the output information list according to the cropping coordinates and the identification of the focused portrait interface.
10. The apparatus of claim 9, wherein the update unit is further configured to:
in response to receiving registration information sent by a video output interface, update the output information list according to the registration information, wherein the registration information comprises an identification of the video output interface and the corresponding video required parameters; and
in response to receiving deregistration information sent by the video output interface, update the output information list according to the deregistration information, wherein the deregistration information comprises the identification of the video output interface.
11. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
12. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-5.
CN202111514569.6A 2021-12-13 2021-12-13 Video output method, video output device, electronic equipment and storage medium Active CN114222073B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111514569.6A CN114222073B (en) 2021-12-13 2021-12-13 Video output method, video output device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN114222073A CN114222073A (en) 2022-03-22
CN114222073B true CN114222073B (en) 2023-02-17

Family

ID=80701180

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111514569.6A Active CN114222073B (en) 2021-12-13 2021-12-13 Video output method, video output device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114222073B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104853193A (en) * 2014-02-19 2015-08-19 腾讯科技(北京)有限公司 Video compression method, device and electronic equipment
CN107241591A (en) * 2017-06-30 2017-10-10 中国航空工业集团公司雷华电子技术研究所 A kind of embedded 3D video image display methods of airborne radar and system
CN108881916A (en) * 2018-06-21 2018-11-23 深圳市斯迈龙科技有限公司 The video optimized processing method and processing device of remote desktop
CN109600574A (en) * 2017-09-30 2019-04-09 上海宝信软件股份有限公司 It is a kind of based on hardware-accelerated mobile flow medium gateway system
CN112533075A (en) * 2020-11-24 2021-03-19 湖南傲英创视信息科技有限公司 Video processing method, device and system

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106713937A (en) * 2016-12-30 2017-05-24 广州虎牙信息科技有限公司 Video playing control method and device as well as terminal equipment
CN109963191A (en) * 2017-12-14 2019-07-02 中兴通讯股份有限公司 A kind of processing method of video information, device and storage medium
CN108235096A (en) * 2018-01-18 2018-06-29 湖南快乐阳光互动娱乐传媒有限公司 The mobile terminal hard decoder method that intelligently the soft decoding of switching plays video
CN110620954B (en) * 2018-06-20 2021-11-26 阿里巴巴(中国)有限公司 Video processing method, device and storage medium for hard solution
CN109495753A (en) * 2018-11-09 2019-03-19 建湖云飞数据科技有限公司 A kind of codec parameters configuration method
WO2021120086A1 (en) * 2019-12-19 2021-06-24 威创集团股份有限公司 Spliced wall image content recognition windowing display method and related device
US11544029B2 (en) * 2020-02-21 2023-01-03 Userful Corporation System and method for synchronized streaming of a video-wall
CN111882483B (en) * 2020-08-31 2024-04-09 北京百度网讯科技有限公司 Video rendering method and device
CN113515320A (en) * 2021-05-26 2021-10-19 新华三信息技术有限公司 Hardware acceleration processing method and device and server
CN113554721B (en) * 2021-07-23 2023-11-14 北京百度网讯科技有限公司 Image data format conversion method and device

Also Published As

Publication number Publication date
CN114222073A (en) 2022-03-22

Similar Documents

Publication Publication Date Title
CN110059623B (en) Method and apparatus for generating information
CN113365146B (en) Method, apparatus, device, medium and article of manufacture for processing video
CN113325954B (en) Method, apparatus, device and medium for processing virtual object
CN113242358A (en) Audio data processing method, device and system, electronic equipment and storage medium
CN113037489B (en) Data processing method, device, equipment and storage medium
CN112688991B (en) Method for performing point cloud scanning operation, related apparatus and storage medium
CN113378855A (en) Method for processing multitask, related device and computer program product
CN114222073B (en) Video output method, video output device, electronic equipment and storage medium
CN114554110B (en) Video generation method, device, electronic equipment and storage medium
CN113033475B (en) Target object tracking method, related device and computer program product
CN114612212A (en) Business processing method, device and system based on risk control
CN114445682A (en) Method, device, electronic equipment, storage medium and product for training model
CN113556575A (en) Method, apparatus, device, medium and product for compressing data
CN113836455A (en) Special effect rendering method, device, equipment, storage medium and computer program product
CN113535020A (en) Method, apparatus, device, medium and product for generating application icons
CN112966607A (en) Model training method, face video generation method, device, equipment and medium
CN112541472B (en) Target detection method and device and electronic equipment
CN113473179B (en) Video processing method, device, electronic equipment and medium
CN115334321B (en) Method and device for acquiring access heat of video stream, electronic equipment and medium
CN113641428B (en) Method and device for acquiring special effect scene packet, electronic equipment and readable storage medium
CN116233051A (en) Page sharing method, device and equipment for applet and storage medium
CN115129488A (en) Streaming data processing method, device, equipment and storage medium
CN114051110A (en) Video generation method and device, electronic equipment and storage medium
CN114967928A (en) Screen sharing method and device, electronic equipment and medium
CN115761094A (en) Image rendering method, device and equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant