CN113852763A - Audio and video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113852763A
CN113852763A
Authority
CN
China
Prior art keywords
target
camera
video stream
acquisition
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111165083.6A
Other languages
Chinese (zh)
Other versions
CN113852763B (en)
Inventor
杨志刚 (Yang Zhigang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Xuanxian Technology Co ltd
Original Assignee
Shanghai Xuanxian Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Xuanxian Technology Co., Ltd.
Priority to CN202111165083.6A
Publication of CN113852763A
Application granted
Publication of CN113852763B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources

Abstract

The application provides an audio and video processing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a user trigger operation, calling a first interface in an acquisition program library to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; displaying the identifiers; returning the target camera selected by the user and the corresponding target acquisition attribute to a second interface in the acquisition program library; and receiving and displaying a target video stream that is acquired by the second interface from the target camera and matches the target acquisition attribute. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching and meeting the camera-switching requirement.

Description

Audio and video processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of real-time audio and video, and in particular, to an audio and video processing method and apparatus, an electronic device, and a storage medium.
Background
In the field of real-time audio and video, there are two main technical directions: the real-time communication server is responsible for the transmission and distribution of audio and video, while the real-time communication client is responsible for the acquisition, rendering, encoding/decoding, and processing of audio and video data. At present, most real-time communication clients are secondary developments based on the Web Real-Time Communication (WebRTC) capability, and video acquisition and rendering use the WebRTC module. As a result, real-time communication clients in the related art can provide only a single, fixed acquisition function.
Disclosure of Invention
The application provides an audio and video processing method and device, electronic equipment and a storage medium.
An embodiment of a first aspect of the present application provides an audio and video processing method, including the following steps: responding to a user trigger operation, calling a first interface in an acquisition program library to obtain an identifier of at least one camera connected with a user terminal and a corresponding candidate acquisition attribute; displaying the identification of each camera and the corresponding candidate acquisition attribute so as to determine a target camera selected by a user and the corresponding target acquisition attribute; returning the target acquisition attribute of the target camera to a second interface in the acquisition program library; and receiving and displaying a target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute.
As a first possible implementation manner of the embodiment of the first aspect of the present application, the receiving and displaying a target video stream, which is acquired by the second interface from the target camera and matched with the target acquisition attribute, includes: receiving a target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute; and calling a renderer in a rendering program library to render the target video stream for display.
As a second possible implementation manner of the embodiment of the first aspect of the present application, after receiving the target video stream, which is acquired by the second interface from the target camera and matched with the target acquisition attribute, the method further includes: backing up the target video stream; and sending the backed-up target video stream to a target client through a server.
As a third possible implementation manner of the embodiment of the first aspect of the present application, the invoking a renderer in a rendering library to render the target video stream for display includes: acquiring a configured rendering element set, wherein the rendering element set comprises a plurality of rendering mappings, and the rendering mappings are mappings of window handles to pointers of the renderer; inquiring the rendering element set according to a target window handle of a window to be displayed of the target video stream so as to determine a pointer of a corresponding target renderer; and calling the target renderer in the rendering program library to render the target video stream for display through the pointer of the target renderer.
As a fourth possible implementation manner of the embodiment of the first aspect of the present application, before the invoking a renderer in a rendering library to render the target video stream for display, the method further includes: performing beautification processing on the target video stream.
As a fifth possible implementation manner of the embodiment of the first aspect of the present application, the displaying the identifier of each camera and the corresponding candidate acquisition attribute to determine a target camera selected by a user and a corresponding target acquisition attribute includes: displaying the identification of each camera; in response to a first selection operation, determining the target camera from the cameras; displaying multiple candidate acquisition attributes of the target camera; and responding to a second selection operation, and determining the target acquisition attribute of the target camera from the plurality of candidate acquisition attributes of the target camera.
The application provides an audio and video processing method: in response to a user trigger operation, a first interface in an acquisition program library is called to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; the identifier of each camera and the corresponding candidate acquisition attributes are displayed; the target camera selected by the user and the corresponding target acquisition attribute are returned to a second interface in the acquisition program library; and a target video stream, acquired by the second interface from the target camera and matching the target acquisition attribute, is received and displayed. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching and meeting the camera-switching requirement.
An embodiment of a second aspect of the present application provides an audio and video processing apparatus, including the following apparatus: the response module is used for responding to the user trigger operation and calling a first interface in the acquisition program library to obtain the identification of at least one camera connected with the user terminal and the corresponding candidate acquisition attribute; the determining module is used for displaying the identification of each camera and the corresponding candidate acquisition attribute so as to determine a target camera selected by a user and the corresponding target acquisition attribute; the return module is used for returning the target acquisition attribute of the target camera to a second interface in the acquisition program library; and the display module is used for receiving and displaying the target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute.
As a first possible implementation manner of the embodiment of the second aspect of the present application, the display module includes: the receiving unit is used for receiving a target video stream which is acquired by the second interface from the target camera and is matched with the target acquisition attribute; and the rendering unit is used for calling a renderer in a rendering program library to render the target video stream for display.
As a second possible implementation manner of the embodiment of the second aspect of the present application, the display module further includes: the backup unit is used for backing up the target video stream; and the sending unit is used for sending the backed-up target video stream to a target client through a server.
As a third possible implementation manner of the embodiment of the second aspect of the present application, the rendering unit is specifically configured to: acquiring a configured rendering element set, wherein the rendering element set comprises a plurality of rendering mappings, and the rendering mappings are mappings of window handles to pointers of the renderer; inquiring the rendering element set according to a target window handle of a window to be displayed of the target video stream so as to determine a pointer of a corresponding target renderer; and calling the target renderer in the rendering program library to render the target video stream for display through the pointer of the target renderer.
As a fourth possible implementation manner of the embodiment of the second aspect of the present application, the display module further includes: and the beautifying unit is used for carrying out beautifying processing on the target video stream.
As a fifth possible implementation manner of the second aspect of the present application, the determining module is specifically configured to: displaying the identification of each camera; in response to a first selection operation, determining the target camera from the cameras; displaying multiple candidate acquisition attributes of the target camera; and responding to a second selection operation, and determining the target acquisition attribute of the target camera from the plurality of candidate acquisition attributes of the target camera.
The application provides an audio and video processing apparatus that, in response to a user trigger operation, calls a first interface in an acquisition program library to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; displays the identifier of each camera and the corresponding candidate acquisition attributes; returns the target camera selected by the user and the corresponding target acquisition attribute to a second interface in the acquisition program library; and receives and displays a target video stream that is acquired by the second interface from the target camera and matches the target acquisition attribute. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching and meeting the camera-switching requirement.
An embodiment of a third aspect of the present application provides an electronic device, including: a memory, a processor, and a computer program stored on the memory and runnable on the processor; when the processor executes the program, the audio and video processing method provided by the embodiment of the first aspect of the present application is performed.
An embodiment of a fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the audio and video processing method set forth in the foregoing embodiment of the first aspect of the present application.
An embodiment of a fifth aspect of the present application provides a computer program product, where the computer program product includes a computer program, and when the computer program is executed by a processor, the computer program implements the audio/video processing method provided in the foregoing embodiment of the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an audio/video processing method according to an embodiment of the present application;
fig. 2 is an exemplary diagram of a custom video capture rendering DLL provided in the second embodiment of the present application;
fig. 3 is a schematic flowchart of an audio/video processing method provided in the third embodiment of the present application;
fig. 4 is a schematic structural diagram of an audio/video processing device according to a fourth embodiment of the present application;
fig. 5 is a schematic structural diagram of an audio/video processing device according to a fifth embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
The audio and video processing method and apparatus according to the embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow diagram of an audio and video processing method provided in an embodiment of the present application.
In the embodiments of the present application, the audio and video processing method is described as being configured in an audio and video processing apparatus, which can be applied to any electronic device so that the electronic device can perform the audio and video processing function.
The electronic device may be any device having computing capability, for example a personal computer (PC), a mobile terminal, or a server. The mobile terminal may be a hardware device having an operating system, a touch screen, and/or a display screen, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in fig. 1, the audio/video processing method may include the steps of:
step 101, responding to a user trigger operation, calling a first interface in an acquisition program library to obtain an identifier of at least one camera connected to a user terminal and a corresponding candidate acquisition attribute.
In some embodiments, after the audio and video processing apparatus receives a user trigger operation, it calls a first interface in the acquisition program library, obtains the interface parameter information of the first interface, and parses that information to obtain the identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes. The candidate acquisition attributes may include, but are not limited to, acquisition resolution, acquisition frame rate, and acquisition pixel format.
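The patent does not specify an implementation language or API, so the enumeration step above can be sketched as follows. This is a minimal Python simulation under stated assumptions: `CameraInfo` and `enumerate_cameras` are illustrative names standing in for the "first interface", and the device list stands in for a real driver query.

```python
# Hypothetical sketch of the "first interface": enumerate connected cameras
# and the candidate acquisition attributes each one supports.
from dataclasses import dataclass, field

@dataclass
class CameraInfo:
    identifier: str                 # e.g. a device path or index
    # candidate attributes as (width, height, fps, pixel_format) tuples
    candidate_attributes: list = field(default_factory=list)

def enumerate_cameras(connected_devices):
    """First interface: return an identifier plus candidate attributes per camera."""
    cameras = []
    for dev in connected_devices:
        cameras.append(CameraInfo(
            identifier=dev["id"],
            candidate_attributes=list(dev["modes"]),
        ))
    return cameras

# Simulated device list standing in for a real driver/OS query
devices = [
    {"id": "cam0", "modes": [(1280, 720, 30, "NV12"), (1920, 1080, 30, "NV12")]},
    {"id": "cam1", "modes": [(640, 480, 30, "YUY2")]},
]
cams = enumerate_cameras(devices)
print([c.identifier for c in cams])  # → ['cam0', 'cam1']
```

A real implementation would query the capture backend (e.g. the operating system's camera API) instead of a hard-coded list; the shape of the returned data is what matters here.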
And 102, displaying the identification of each camera and the corresponding candidate acquisition attribute so as to determine a target camera selected by a user and the corresponding target acquisition attribute.
In some embodiments, to meet the user's need to select cameras individually, after the identifiers of the cameras connected to the user terminal and the corresponding candidate acquisition attributes are obtained, the identifiers of the cameras are displayed, and the target camera is determined from among them in response to a first selection operation. The multiple candidate acquisition attributes of the target camera are then displayed, and in response to a second selection operation, the target acquisition attribute is determined from among them. In this way, the target camera and its target acquisition attribute can be determined accurately from the many camera identifiers and candidate acquisition attributes.
In some embodiments, the candidate acquisition attributes include, but are not limited to, acquisition resolution, acquisition frame rate, and acquisition pixel format. Taking acquisition resolution as an example: each external camera supports a different set of resolutions, usually several. After one of the external cameras is selected, the software interface lists the resolutions supported by that camera, and one of them is selected. An appropriate resolution can be chosen according to network conditions; 1280×720 is generally suitable. Once a resolution is selected, the camera captures pictures at that size.
Generally, the higher the resolution, the clearer the picture and the better the corresponding picture quality.
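The two-step selection described above (first selection operation picks the camera, second picks one of its candidate attributes) can be sketched as a small Python function. All names and the data layout are illustrative assumptions, not from the patent.

```python
# Illustrative sketch of the two selection operations: the first selects a
# target camera, the second selects one of its candidate acquisition
# attributes (e.g. a resolution such as 1280x720 at 30 fps).
def select_target(cameras, camera_choice, attribute_choice):
    target_camera = cameras[camera_choice]            # first selection operation
    candidates = target_camera["candidate_attributes"]
    target_attribute = candidates[attribute_choice]   # second selection operation
    return target_camera["identifier"], target_attribute

cameras = [
    {"identifier": "cam0",
     "candidate_attributes": [(640, 480, 30), (1280, 720, 30), (1920, 1080, 30)]},
]
cam_id, attr = select_target(cameras, 0, 1)
print(cam_id, attr)  # cam0 (1280, 720, 30)
```

In a real client the two indices would come from UI events rather than function arguments; the point is that only the attribute list of the already-chosen camera is offered for the second selection.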
And 103, returning the target acquisition attribute of the target camera to a second interface in the acquisition program library.
In the embodiment of the application, after a target acquisition attribute of the target camera is selected that meets the acquisition resolution, acquisition frame rate, and pixel format supported by the user's hardware, that target acquisition attribute is returned to the second interface in the acquisition program library.
And 104, receiving and displaying a target video stream which is acquired by the second interface from the target camera and is matched with the target acquisition attribute.
In the embodiment of the present application, one implementation of receiving and displaying the target video stream, which is acquired by the second interface from the target camera and matches the target acquisition attribute, is as follows. The target video stream is received from the second interface. A configured rendering element set is obtained; the rendering element set includes a plurality of rendering mappings, each of which maps a window handle to a pointer of a renderer. The rendering element set is queried using the target window handle of the window in which the target video stream is to be displayed, to determine the pointer of the corresponding target renderer. Through that pointer, the target renderer in the rendering program library is called to render the target video stream for display. In this way, the target video stream collected by the target camera is rendered accurately in the corresponding display window, making it convenient for the user to view it there.
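The rendering element set described above is essentially a map from window handles to renderers. A minimal sketch, assuming Python objects stand in for the renderer pointers and integers stand in for window handles (the patent targets a DLL, where these would be HWNDs and C++ pointers):

```python
# Minimal sketch: the rendering element set as a dict mapping window handles
# to renderer objects (the patent's "mappings of window handles to pointers
# of the renderer"). Names are illustrative.
class Renderer:
    def __init__(self, name):
        self.name = name
        self.rendered = []          # frames this renderer has drawn

    def render(self, frame):
        self.rendered.append(frame)
        return f"{self.name}:{frame}"

# Configured rendering element set: window handle -> renderer
rendering_elements = {
    0x1001: Renderer("main-window"),
    0x1002: Renderer("preview-window"),
}

def render_to_window(window_handle, frame):
    # Query the set by the target window handle to find the target renderer,
    # then call that renderer to draw the frame.
    renderer = rendering_elements[window_handle]
    return renderer.render(frame)

print(render_to_window(0x1001, "frame-0"))  # main-window:frame-0
```

The design choice the lookup enables: several windows can show several streams simultaneously, and each incoming frame is routed to exactly the renderer bound to its destination window.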
In other embodiments of the present application, to meet the need for beautification during a video session, after the target video stream acquired by the second interface from the target camera and matching the target acquisition attribute is received, beautification processing may be applied to the target video stream before a renderer in the rendering program library is called to render it for display.
In some embodiments, to support individually configured beautification effects, this embodiment may further receive a beautification setting request, determine beautification parameters from that request, and apply beautification processing to the target video stream according to those parameters.
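The request-to-parameters-to-processing chain above can be sketched as follows. This is an illustrative simulation only: the parameter names (`smoothing`, `brightness`) are hypothetical, since the patent does not enumerate the beautification parameters, and the image processing itself is stubbed out.

```python
# Hedged sketch: beautification parameters are determined from a user's
# setting request, then applied per frame. Parameter names are hypothetical.
def parse_beauty_request(request):
    """Determine beautification parameters from a beauty setting request."""
    return {"smoothing": request.get("smoothing", 0.0),
            "brightness": request.get("brightness", 0.0)}

def beautify(frame, params):
    # Stand-in for real image processing: record which parameters were applied.
    return {**frame, "applied": dict(params)}

params = parse_beauty_request({"smoothing": 0.5})
frame = beautify({"seq": 0}, params)
print(frame["applied"])  # {'smoothing': 0.5, 'brightness': 0.0}
```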
The application provides an audio and video processing method: in response to a user trigger operation, a first interface in an acquisition program library is called to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; the identifier of each camera and the corresponding candidate acquisition attributes are displayed; the target camera selected by the user and the corresponding target acquisition attribute are returned to a second interface in the acquisition program library; and a target video stream, acquired by the second interface from the target camera and matching the target acquisition attribute, is received and displayed. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching and meeting the camera-switching requirement.
Based on the above embodiment, as shown in fig. 2, this embodiment packages an acquisition program library (a DLL) that supports switching among multiple external cameras and switching of acquisition attributes, and calls a renderer in a rendering program library to render the target video stream for display.
In order to make the present application clear to those skilled in the art, this embodiment provides another audio and video processing method; fig. 3 is a schematic flow diagram of an audio and video processing method provided in the third embodiment of the present application.
As shown in fig. 3, the audio/video processing method may include the steps of:
step 301, in response to a user trigger operation, calling a first interface in an acquisition program library to obtain an identifier of at least one camera connected to a user terminal and a corresponding candidate acquisition attribute.
Step 302, displaying the identification of each camera and the corresponding candidate acquisition attribute to determine the target camera selected by the user and the corresponding target acquisition attribute.
Step 303, returning the target collection attribute of the target camera to the second interface in the collection program library.
It is to be understood that, for the specific implementation of step 301 to step 303, reference may be made to the relevant description in the foregoing embodiments, and details are not described herein again.
And step 304, receiving a target video stream which is acquired by the second interface from the target camera and is matched with the target acquisition attribute.
In this embodiment of the application, after the target video stream acquired by the second interface from the target camera and matching the target acquisition attribute is received, beautification processing may be applied to the target video stream, and the beautified target video stream is then passed to a renderer in the rendering program library to be rendered for display.
In an embodiment of the application, after the target video stream acquired by the second interface from the target camera and matching the target acquisition attribute is received, the target video stream can be backed up, and the backed-up target video stream can be sent to a target client through a server, so that the target client can also see the local picture.
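The backup-and-forward step above can be sketched as follows. The queue objects are stand-ins for the real server uplink, not an actual network API; the point is that each frame is duplicated so the local preview and the target client receive the same picture.

```python
# Hedged sketch of the backup-and-forward step: keep a local copy of each
# frame for the local rendering path, and queue the backup for the server,
# which relays it to the target client.
import copy
from collections import deque

local_preview = []        # frames on the local rendering path
server_uplink = deque()   # backed-up frames queued for server -> target client

def on_frame(frame):
    backup = copy.deepcopy(frame)   # back up the target video stream
    local_preview.append(frame)     # local display
    server_uplink.append(backup)    # forwarded copy for the target client

for i in range(3):
    on_frame({"seq": i, "data": b"..."})

print(len(local_preview), len(server_uplink))  # 3 3
```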
Step 305, calling a renderer in a rendering program library to render the target video stream for display.
In the embodiment of the present application, one implementation of calling a renderer in the rendering program library to render the target video stream for display is as follows: obtain the rendering element set configured in the rendering program library; query the rendering element set using the target window handle of the window in which the target video stream is to be displayed, to determine the pointer of the corresponding target renderer; and, through that pointer, call the target renderer in the rendering program library to render the target video stream for display.
The rendering element set comprises a plurality of rendering mappings, and the rendering mappings are mappings of window handles to pointers of a renderer.
The application provides an audio and video processing method: in response to a user trigger operation, a first interface in an acquisition program library is called to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; the identifier of each camera and the corresponding candidate acquisition attributes are displayed; the target camera selected by the user and the corresponding target acquisition attribute are returned to a second interface in the acquisition program library; the target video stream acquired by the second interface from the target camera and matching the target acquisition attribute is received; and a renderer in a rendering program library is called to render the target video stream for display. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching; the target video stream is rendered in a customized manner, improving the rendering performance of the rendering module.
In order to implement the above embodiments, the present application further provides an audio/video processing device.
Fig. 4 is a schematic structural diagram of an audio/video processing device according to an embodiment of the present application.
As shown in fig. 4, the audio and video processing apparatus 400 may include: a response module 401, a determination module 402, a return module 403, and a presentation module 404.
The response module 401 is configured to invoke a first interface in the acquisition program library in response to a user trigger operation, so as to obtain an identifier of at least one camera connected to the user terminal and a corresponding candidate acquisition attribute.
A determining module 402, configured to display the identifier of each camera and the corresponding candidate acquisition attribute, so as to determine a target camera selected by the user and the corresponding target acquisition attribute.
And a returning module 403, configured to return the target acquisition attribute of the target camera to the second interface in the acquisition library.
And the display module 404 is configured to receive and display a target video stream, which is acquired by the second interface from the target camera and matches the target acquisition attribute.
The application provides an audio and video processing apparatus that, in response to a user trigger operation, calls a first interface in an acquisition program library to obtain an identifier of at least one camera connected to the user terminal and the corresponding candidate acquisition attributes; displays the identifier of each camera and the corresponding candidate acquisition attributes; returns the target camera selected by the user and the corresponding target acquisition attribute to a second interface in the acquisition program library; and receives and displays a target video stream that is acquired by the second interface from the target camera and matches the target acquisition attribute. In this way, the user can select a target camera from the at least one camera as needed to capture images, realizing camera switching and meeting the camera-switching requirement.
As a possible implementation manner of the embodiment of the present application, as shown in fig. 5, the display module 404 includes:
the receiving unit 4041 is configured to receive a target video stream, which is acquired by the second interface from the target camera and matches the target acquisition attribute.
The rendering unit 4042 is configured to invoke a renderer in the rendering library to render the target video stream for display.
As a possible implementation manner of the embodiment of the present application, as shown in fig. 5, the display module 404 further includes:
the backup unit 4043 is configured to backup the target video stream.
A sending unit 4044, configured to send the backed-up target video stream to the target client through the server.
As a possible implementation manner of the embodiment of the present application, the rendering unit 4042 is specifically configured to:
and acquiring a configured rendering element set, wherein the rendering element set comprises a plurality of rendering mappings, and the rendering mappings are mappings of window handles to pointers of a renderer.
And inquiring the rendering element set according to the target window handle of the window to be displayed of the target video stream so as to determine the pointer of the corresponding target renderer.
And calling the target renderer in the rendering program library to render the target video stream for display through the pointer of the target renderer.
As a possible implementation manner of the embodiment of the present application, as shown in fig. 5, the display module 404 further includes:
The beautifying unit 4045 is configured to perform beautification processing on the target video stream.
As a possible implementation manner of the embodiment of the present application, the determining module 402 is specifically configured to:
Display the identifier of each camera.
In response to a first selection operation, determine the target camera from among the cameras.
Display the multiple candidate acquisition attributes of the target camera.
In response to a second selection operation, determine the target acquisition attribute of the target camera from among its candidate acquisition attributes.
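The two-stage selection above can be modeled directly; the two callbacks stand in for the user's first and second selection operations (the data shapes are assumptions for illustration):

```python
def select_target(cameras, pick_camera, pick_attr):
    """Two-step selection: pick the target camera, then one of its
    candidate acquisition attributes.

    `cameras` maps camera identifier -> list of candidate attributes;
    `pick_camera` / `pick_attr` model the first and second selections.
    """
    camera_id = pick_camera(sorted(cameras))  # first selection operation
    candidates = cameras[camera_id]
    attr = pick_attr(candidates)              # second selection operation
    if attr not in candidates:
        raise ValueError("not a candidate attribute of the target camera")
    return camera_id, attr
```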
To implement the foregoing embodiments, the present application further provides an electronic device, which may include a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the audio and video processing method proposed in the foregoing embodiments of the application is performed.
It should be noted that the foregoing explanation of the audio and video processing method embodiments also applies to the electronic device of this embodiment, and details are not repeated here.
In order to implement the foregoing embodiments, the present application further proposes a computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements an audio/video processing method as proposed in any of the foregoing embodiments of the present application.
In order to implement the foregoing embodiments, the present application further provides a computer program product, where the computer program product includes a computer program, and when the computer program is executed by a processor, the computer program implements the audio/video processing method proposed in any of the foregoing embodiments of the present application.
Fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device may include:
a memory 601, a processor 602, and a computer program stored on the memory 601 and executable on the processor 602.
The processor 602, when executing the program, implements the audio and video processing method provided in any of the embodiments illustrated in fig. 1 and fig. 3.
Further, the electronic device may further include:
a communication interface 603 for communication between the memory 601 and the processor 602.
The memory 601 is used for storing computer programs that can be run on the processor 602.
The memory 601 may comprise high-speed RAM and may also include non-volatile memory, such as at least one disk storage device.
The processor 602 is configured to implement, when executing the program, the audio and video processing method according to any one of the embodiments shown in fig. 1 and fig. 3.
If the memory 601, the processor 602, and the communication interface 603 are implemented independently, they may be connected to one another through a bus and communicate with one another. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean there is only one bus or one type of bus.
Optionally, in a specific implementation, if the memory 601, the processor 602, and the communication interface 603 are integrated on a chip, the memory 601, the processor 602, and the communication interface 603 may complete mutual communication through an internal interface.
The processor 602 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing steps of a custom logic function or process. The scope of the preferred embodiments of the present application also includes alternate implementations in which functions may be executed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functionality involved, as would be understood by those reasonably skilled in the art.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as a stand-alone product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An audio/video processing method is characterized by comprising the following steps:
responding to a user trigger operation, calling a first interface in an acquisition program library to obtain an identifier of at least one camera connected with a user terminal and a corresponding candidate acquisition attribute;
displaying the identification of each camera and the corresponding candidate acquisition attribute so as to determine a target camera selected by a user and the corresponding target acquisition attribute;
returning the target acquisition attribute of the target camera to a second interface in the acquisition program library;
and receiving and displaying a target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute.
2. The method of claim 1, wherein the receiving and presenting the target video stream that is obtained by the second interface from the target camera and matches the target capture attribute comprises:
receiving a target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute;
and calling a renderer in a rendering program library to render the target video stream for display.
3. The method of claim 2, wherein after receiving the target video stream acquired by the second interface from the target camera and matching the target acquisition attribute, further comprising:
backing up the target video stream;
and sending the backed-up target video stream to a target client through a server.
4. The method of claim 2, wherein the invoking a renderer in a rendering library to render the target video stream for presentation comprises:
acquiring a configured rendering element set, wherein the rendering element set comprises a plurality of rendering mappings, and the rendering mappings are mappings of window handles to pointers of the renderer;
inquiring the rendering element set according to a target window handle of a window to be displayed of the target video stream so as to determine a pointer of a corresponding target renderer;
and calling the target renderer in the rendering program library to render the target video stream for display through the pointer of the target renderer.
5. The method of claim 2, wherein before the invoking the renderer in the rendering library to render the target video stream for presentation, further comprises:
and performing beauty treatment on the target video stream.
6. The method of any one of claims 1-5, wherein said presenting an identification of each of said cameras and corresponding candidate acquisition attributes to determine a user selected target camera and corresponding target acquisition attributes comprises:
displaying the identification of each camera;
in response to a first selection operation, determining the target camera from the cameras;
displaying multiple candidate acquisition attributes of the target camera;
and responding to a second selection operation, and determining the target acquisition attribute of the target camera from the plurality of candidate acquisition attributes of the target camera.
7. An audio-video processing apparatus, characterized by comprising:
the response module is used for responding to the user trigger operation and calling a first interface in the acquisition program library to obtain the identification of at least one camera connected with the user terminal and the corresponding candidate acquisition attribute;
the determining module is used for displaying the identification of each camera and the corresponding candidate acquisition attribute so as to determine a target camera selected by a user and the corresponding target acquisition attribute;
the return module is used for returning the target acquisition attribute of the target camera to a second interface in the acquisition program library;
and the display module is used for receiving and displaying the target video stream which is acquired by the second interface from the target camera and matched with the target acquisition attribute.
8. The apparatus of claim 7, wherein the display module comprises:
the receiving unit is used for receiving a target video stream which is acquired by the second interface from the target camera and is matched with the target acquisition attribute;
and the rendering unit is used for calling a renderer in a rendering program library to render the target video stream for display.
9. An electronic device, characterized in that the electronic device comprises: memory, processor and computer program stored on the memory and executable on the processor, which when executing the program performs the audio-video processing method as claimed in any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the audio-video processing method according to any one of claims 1 to 6.
CN202111165083.6A 2021-09-30 2021-09-30 Audio and video processing method and device, electronic equipment and storage medium Active CN113852763B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111165083.6A CN113852763B (en) 2021-09-30 2021-09-30 Audio and video processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113852763A true CN113852763A (en) 2021-12-28
CN113852763B CN113852763B (en) 2023-12-12

Family

ID=78977452

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111165083.6A Active CN113852763B (en) 2021-09-30 2021-09-30 Audio and video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113852763B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134509A1 (en) * 2022-01-12 2023-07-20 北京字节跳动网络技术有限公司 Video stream pushing method and apparatus, and terminal device and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110674061A (en) * 2019-09-24 2020-01-10 上海商汤临港智能科技有限公司 Control method and device of machine equipment and storage medium
CN112995562A (en) * 2019-12-13 2021-06-18 南京酷派软件技术有限公司 Camera calling method and device, storage medium and terminal

Also Published As

Publication number Publication date
CN113852763B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
CN107172346B (en) Virtualization method and mobile terminal
CN111314646B (en) Image acquisition method, image acquisition device, terminal device and readable storage medium
CN107295352B (en) Video compression method, device, equipment and storage medium
CN108228293B (en) Interface skin switching method and device
CN111163345A (en) Image rendering method and device
CN110650247A (en) Method for customizing on/off animation, intelligent terminal and storage medium
CN106469183A (en) Page rendering method and device, page data processing method and client
CN110968391A (en) Screenshot method, screenshot device, terminal equipment and storage medium
CN113852763B (en) Audio and video processing method and device, electronic equipment and storage medium
CN113014993B (en) Picture display method, device, equipment and storage medium
CN111796896A (en) Theme switching method of application page and related equipment
CN110188782B (en) Image similarity determining method and device, electronic equipment and readable storage medium
CN114021016A (en) Data recommendation method, device, equipment and storage medium
CN110290201B (en) Picture acquisition method, mobile terminal, server and storage medium
CN109857907B (en) Video positioning method and device
CN113220446A (en) Image or video data processing method and terminal equipment
CN112084386A (en) Cloud hosting client information management method and device and server
CN111061414A (en) Skin replacement method and device, electronic equipment and readable storage medium
CN109104608B (en) Television performance test method, equipment and computer readable storage medium
CN110287431B (en) Image file loading method and device, electronic equipment and storage medium
CN111221444A (en) Split screen special effect processing method and device, electronic equipment and storage medium
CN112988810B (en) Information searching method, device and equipment
CN114387402A (en) Virtual reality scene display method and device, electronic equipment and readable storage medium
CN114125531A (en) Video preview method, device, terminal and storage medium
CN109269628B (en) Method for monitoring motor vibration, terminal device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant