CN112135157B - Video content acquisition method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN112135157B
CN112135157B (application CN202010974663.9A)
Authority
CN
China
Prior art keywords
image
real-time interactive
memory space
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010974663.9A
Other languages
Chinese (zh)
Other versions
CN112135157A (en)
Inventor
周鑫
李锐
陶澍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Ivreal Technology Co ltd
Original Assignee
Chongqing Ivreal Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Ivreal Technology Co., Ltd.
Priority to CN202010974663.9A
Publication of CN112135157A
Application granted
Publication of CN112135157B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10Services
    • G06Q50/20Education
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N21/4788Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/816Monomedia components thereof involving special video data, e.g. 3D video

Abstract

The invention provides a video content acquisition method and apparatus, an electronic device, and a storage medium. The method is applied to an electronic device that includes an image processor and a central processing unit, where the memory space of the image processor includes a first display card memory space and a second display card memory space. The method includes the following steps: acquiring a real-time interactive image among a plurality of users in the first display card memory space; importing the real-time interactive image into the second display card memory space; performing video image conversion on the real-time interactive image in the second display card memory space by using the image processor to obtain a video image; importing the video image and the audio information corresponding to the video image into the central processing unit; and synthesizing the video image and the audio information by using the central processing unit to obtain video content. The invention can reduce the load of the central processing unit.

Description

Video content acquisition method and device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a method and an apparatus for acquiring video content, an electronic device, and a storage medium.
Background
At present, with the development of live video technology, the two parties to remote teaching (such as a teacher and a student) can interact by video from different places. However, a third party (such as a parent) cannot observe the interaction between the two teaching parties; at the same time, the processing of video images occupies considerable central processing unit resources, so the load on the central processing unit is large.
Therefore, how to reduce the load of the central processing unit is a technical problem to be solved.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a video content acquiring method, device, electronic device and storage medium, which can acquire images interacted among multiple parties, and at the same time, reduce the load of a central processing unit.
A first aspect of the present invention provides a video content obtaining method, which is applied to an electronic device, where the electronic device includes an image processor and a central processing unit, a memory space of the image processor includes a first graphics card memory space and a second graphics card memory space, and the video content obtaining method includes:
acquiring real-time interactive images among a plurality of users in the memory space of the first display card;
importing the real-time interactive image into the memory space of the second display card;
performing video image conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a video image;
importing the video image and the audio information corresponding to the video image into a central processing unit;
and synthesizing the video image and the audio information by using the central processing unit to obtain video content.
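The five steps of the first aspect can be sketched as a minimal pipeline. This is an illustrative sketch only: all function names are hypothetical, frames are modeled as byte strings, and the two dictionaries stand in for the two display card memory spaces, which in a real system live on the GPU.

```python
# Illustrative sketch of the five-step pipeline from the first aspect.
# All names are hypothetical; the patent does not specify an API.

def acquire_interactive_image(first_gpu_mem):
    """Step 1: read the real-time interactive image from the first
    display card memory space (modeled here as a dict)."""
    return first_gpu_mem["interactive_image"]

def import_to_second_space(image, second_gpu_mem):
    """Step 2: copy the image into the second display card memory
    space so later processing runs in an independent context."""
    second_gpu_mem["frame"] = image
    return second_gpu_mem["frame"]

def gpu_convert_to_video_image(image):
    """Step 3: GPU-side video image conversion (stand-in: tag the frame)."""
    return b"video:" + image

def synthesize_on_cpu(video_image, audio):
    """Steps 4-5: import to the CPU and synthesize video with audio."""
    return {"video": video_image, "audio": audio}

first_mem = {"interactive_image": b"frame0"}
second_mem = {}
img = acquire_interactive_image(first_mem)
img = import_to_second_space(img, second_mem)
vid = gpu_convert_to_video_image(img)
content = synthesize_on_cpu(vid, b"audio0")
print(content["video"])  # b'video:frame0'
```

The point of the split is that only the last, cheap step (muxing the already-converted frames with audio) touches the central processing unit.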
In a possible implementation manner, the obtaining a real-time interactive image among a plurality of users in the memory space of the first display card includes:
creating an image display window;
and reading the data of the memory space of the first display card through the image display window so as to obtain a real-time interactive image among a plurality of users stored in the memory space of the first display card.
In a possible implementation manner, the obtaining the real-time interactive image among the plurality of users in the memory space of the first display card includes:
Determining the transaction state of a real-time interactive image among a plurality of users in the memory space of the first video card;
judging whether the transaction state is an unlocking state;
and if the transaction state of the real-time interactive image is the unlocking state, acquiring the real-time interactive image.
In a possible implementation manner, before the obtaining of the real-time interactive image among the plurality of users in the memory space of the first display card, the video content obtaining method further includes:
acquiring a first real-time image of a first user and acquiring a second real-time image of a second user;
generating the real-time interactive image according to the first real-time image, the second real-time image and a preset virtual scene;
and storing the real-time interactive image into the memory space of the first display card.
In a possible implementation manner, the performing, by using the image processor, video image conversion on the real-time interactive image in the memory space of the second video card to obtain a video image includes:
performing preset standard format conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a standard image;
adjusting the standard image into a target image with a preset size;
and converting the target image into a video image in a preset video format.
A second aspect of the present invention provides a video content acquisition apparatus comprising:
the acquisition module is used for acquiring real-time interactive images among a plurality of users in the memory space of the first display card;
the import module is used for importing the real-time interactive image into the memory space of the second display card;
the conversion module is used for performing video image conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a video image;
the import module is further used for importing the video image and the audio information corresponding to the video image into a central processing unit;
and the synthesis module is used for synthesizing the video image and the audio information by using the central processing unit to obtain video content.
A third aspect of the invention provides an electronic device comprising a processor and a memory, the processor being configured to implement the video content acquisition method when executing a computer program stored in the memory.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the video content acquisition method.
According to the technical scheme, the real-time interactive image can be collected and processed by the image processor; only the processed video image and the corresponding audio information need to be imported into the central processing unit for synthesis to obtain the video content, thereby reducing the load of the central processing unit.
Drawings
Fig. 1 is a flowchart illustrating a video content obtaining method according to a preferred embodiment of the present invention.
Fig. 2 is a functional block diagram of a video content acquiring apparatus according to a preferred embodiment of the present disclosure.
Fig. 3 is a schematic structural diagram of an electronic device implementing a video content obtaining method according to a preferred embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
The video content acquisition method of the embodiment of the invention is applied to an electronic device, and can also be applied to a hardware environment formed by an electronic device and a server connected to the electronic device through a network, in which case the method is executed jointly by the server and the electronic device. The network includes, but is not limited to: a wide area network, a metropolitan area network, or a local area network.
A server may refer to a computer system that provides services to other devices (e.g., electronic devices) in a network. A personal computer may also be called a server if it can externally provide a File Transfer Protocol (FTP) service. In a narrow sense, a server refers to a high-performance computer, which can provide services to the outside through a network, and has higher requirements on stability, security, performance and the like compared with a common personal computer, so that hardware such as a CPU, a chipset, a memory, a disk system, a network and the like is different from the common personal computer.
The electronic device is a device capable of automatically performing numerical calculation and/or information processing according to a preset or stored instruction, and the hardware thereof includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like. The electronic device may also include a network device and/or a user device. The network device includes, but is not limited to, a single network device, a server group consisting of a plurality of network devices, or a Cloud Computing (Cloud Computing) based Cloud consisting of a large number of hosts or network devices, wherein the Cloud Computing is one of distributed Computing, and is a super virtual computer consisting of a group of loosely coupled computers. The user device includes, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch pad, or a voice control device, for example, a personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), or the like.
Referring to fig. 1, fig. 1 is a flowchart illustrating a video content obtaining method according to a preferred embodiment of the present invention. The order of the steps in the flowchart may be changed, and some steps may be omitted. Wherein, the execution subject of the video content acquisition method can be an electronic device.
And S11, acquiring real-time interactive images among a plurality of users in the memory space of the first display card.
The real-time interactive image may refer to an image in which two or more users in different locations interact with each other, such as an interactive image of a teacher giving a remote lecture and a student attending it. Generally, the teacher can see a real-time image of the student on the screen of an electronic device, and the student can see a real-time image of the teacher, but neither can obtain an image of the teacher and the student interacting together; the real-time interactive image may be such an image of the teacher and the student interacting, synthesized by technical means.
According to the embodiment of the invention, the real-time interactive image among a plurality of users can be acquired.
As an optional implementation manner, before obtaining the real-time interactive image among the plurality of users in the memory space of the first display card, the method further includes:
acquiring a first real-time image of a first user and acquiring a second real-time image of a second user;
generating the real-time interactive image according to the first real-time image, the second real-time image and a preset virtual scene;
and storing the real-time interactive image into the memory space of the first display card.
In this optional embodiment, a first real-time image of the first user (for example, a real-time image of the first user's side) may be acquired by the camera device of the first user, and a second real-time image of the second user (for example, a real-time image of the second user's side) may be acquired by the camera device of the second user. The real-time interactive image may then be obtained by fusing the first real-time image and the second real-time image into a preset virtual scene using Virtual Reality (VR) and Mixed Reality (MR) technology in the Unreal Engine 4. Each frame of the real-time interactive image may be stored and updated into the first display card memory space (e.g., a UTextureRenderTarget2D).
Virtual reality uses a computer to generate a simulated environment: data from real life is converted by computer technology into electronic signals, which are combined with various output devices so that they become phenomena people can perceive. These phenomena may be real objects that exist in reality, or substances invisible to the naked eye, expressed through three-dimensional models.
The mixed reality may refer to overlaying real things into a virtual scene, virtualizing the real things (e.g., a picture captured by a camera), performing 3D modeling, and then blending into a virtual 3D scene.
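The fusion of the users' real-time images into the virtual scene can be illustrated with a toy example. In a real system this blending happens on the GPU inside the game engine; here, as a hedged stand-in, a tiny CPU alpha-over on grayscale pixel grids shows the idea. The 2x2 grids and the 0.5 alpha value are illustrative, not from the patent.

```python
# Toy sketch: composite two users' real-time images into a preset
# virtual scene via alpha blending (out = a*fg + (1-a)*bg).

def alpha_over(fg, bg, alpha):
    """Blend a foreground grid onto a background grid, per pixel."""
    return [[round(alpha * f + (1 - alpha) * b) for f, b in zip(fr, br)]
            for fr, br in zip(fg, bg)]

scene  = [[100, 100], [100, 100]]   # preset virtual scene (mid gray)
user_a = [[255, 255], [255, 255]]   # first user's real-time image
user_b = [[0, 0], [0, 0]]           # second user's real-time image

interactive = alpha_over(user_a, scene, 0.5)        # place first user
interactive = alpha_over(user_b, interactive, 0.5)  # place second user
print(interactive)  # [[89, 89], [89, 89]]
```

The resulting `interactive` grid plays the role of one frame of the real-time interactive image that is then written into the first display card memory space.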
Optionally, a real-time image of the front of the first user and a real-time image of the front of the second user may be captured by another photographing device, and the real-time image of the front of the second user is output on the device of the first user and the real-time image of the front of the first user is output on the device of the second user.
Specifically, the acquiring a real-time interactive image among a plurality of users in the memory space of the first video card includes:
creating an image display window;
and reading the data of the memory space of the first display card through the image display window so as to obtain a real-time interactive image among a plurality of users stored in the memory space of the first display card.
In this optional embodiment, a window (the image display window) may first be created to display the real-time interactive image, so that an operator can find problems in time during image processing, perform error analysis, and so on. At the same time, the data of the memory space of the first display card is read through the image display window, so as to obtain the real-time interactive image among the plurality of users stored there. The image display window acts as an intermediary: the system only needs to place the real-time interactive image in the image display window, and other programs only need to take the real-time interactive image directly from the image display window according to the window identifier (window handle), without concerning themselves with anything else. This avoids logic judgments and other interactions with other programs and improves operating efficiency.
The window identifier may be a Handle of a window, and the Handle (Handle) may be an identifier used to identify an object or an item, and may be used to describe a form, a file, or the like.
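The window-as-intermediary pattern can be sketched as follows. This is a hedged illustration only: the class and method names are invented, and a Python dict stands in for the OS-level window registry and handle table that a real implementation would use.

```python
# Sketch of the intermediary pattern: the producer only publishes
# frames into the window; consumers only read by window handle.

class ImageDisplayWindow:
    _windows = {}      # handle -> window registry (stand-in for the OS)
    _next_handle = 1

    def __init__(self):
        self.handle = ImageDisplayWindow._next_handle
        ImageDisplayWindow._next_handle += 1
        self.frame = None
        ImageDisplayWindow._windows[self.handle] = self

    def publish(self, frame):
        """Producer side: place the latest real-time interactive image."""
        self.frame = frame

    @classmethod
    def read(cls, handle):
        """Consumer side: fetch the frame by window handle only,
        without any other coupling to the producer."""
        return cls._windows[handle].frame

win = ImageDisplayWindow()
win.publish(b"interactive-frame-1")
print(ImageDisplayWindow.read(win.handle))  # b'interactive-frame-1'
```

Because consumers see only the handle, producer and consumers never need to coordinate beyond the window itself, which is the decoupling the paragraph above describes.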
Specifically, the acquiring a real-time interactive image among a plurality of users in the memory space of the first video card includes:
determining the transaction state of a real-time interactive image among a plurality of users in the memory space of the first video card;
judging whether the transaction state is an unlocking state;
and if the transaction state of the real-time interactive image is the unlocking state, acquiring the real-time interactive image.
In this alternative embodiment, the real-time interactive image may be obtained directly from the storage space or program in which it is stored, but it is necessary to ensure that the real-time interactive image is not locked: if it is locked, the real-time interactive image is occupied or being processed and cannot be used by other programs. If the transaction state of the real-time interactive image is the unlocked state, the real-time interactive image can be used, that is, it can be obtained.
And S12, importing the real-time interactive image into the memory space of the second display card.
In the embodiment of the present invention, the real-time interactive image may be imported into the memory space of the second display card (e.g., an ID3D11Texture2D), so that subsequent processing of the real-time interactive image in the memory space of the second display card can be performed in an independent operating environment (independent context), unaffected by the operation of other programs, which reduces the probability of errors. The independent operating environment may include independent programs, independent operating resources, and independent data; that is, data processing in the independent operating environment is not affected by the external environment.
And S13, using the image processor to perform video image conversion on the real-time interactive image in the memory space of the second display card to obtain a video image.
The image processor (Graphics Processing Unit, GPU) is a microprocessor dedicated to image- and graphics-related operations on personal computers, workstations, game consoles, and some mobile devices (e.g., tablet computers and smart phones).
In the embodiment of the invention, the image processor is used for carrying out video image conversion on the real-time interactive image in the memory space of the second display card, and basically no central processing unit is required to participate, so that the load of the central processing unit is reduced, and the system performance is improved.
Specifically, the using the image processor to perform video image conversion on the real-time interactive image in the memory space of the second display card to obtain the video image includes:
performing preset standard format conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a standard image;
adjusting the standard image into a target image with a preset size;
and converting the target image into a video image in a preset video format.
In this optional implementation, format conversion may be performed on the real-time interactive image in the memory space of the second display card according to a preset standard format (for example, the RGBA format) to obtain a standard image; the standard image is then adjusted to a target image of a preset size; and finally the target image is converted into a video image in a preset video format, for example the YUV420P format.
In the YUV420P format, YUV is a color space: "Y" represents luminance (Luma), and "U" and "V" represent chrominance (hue and saturation). In full sampling, every pixel carries its own complete YUV component information. YUV420P subsamples the chrominance using 2 x 2 pixel blocks: the 4 pixels in a block each keep their own Y component but share one U byte and one V byte, for a total of 6 bytes per block. One frame of image therefore occupies less space overall.
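The 2x2-block arithmetic can be checked with a few lines of code: one Y byte per pixel plus one U and one V byte per 2x2 block gives 6 bytes per 4 pixels, i.e. 1.5 bytes per pixel. The 1920x1080 example resolution is illustrative, not taken from the patent.

```python
# Worked check of YUV420P frame size: 1.5 bytes per pixel.

def yuv420p_frame_bytes(width, height):
    y_bytes = width * height           # one Y byte per pixel
    u_bytes = (width * height) // 4    # one U byte per 2x2 block
    v_bytes = (width * height) // 4    # one V byte per 2x2 block
    return y_bytes + u_bytes + v_bytes

print(yuv420p_frame_bytes(2, 2))        # 6 bytes for one 2x2 block
print(yuv420p_frame_bytes(1920, 1080))  # 3110400 bytes (~3.0 MB)
```

By comparison, the RGBA standard image uses 4 bytes per pixel, so the conversion cuts the per-frame footprint to 3/8 of the intermediate representation.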
And S14, importing the video image and the audio information corresponding to the video image into a central processing unit.
The Central Processing Unit (CPU) may be a core component responsible for reading, decoding, and executing instructions.
And S15, synthesizing the video image and the audio information by using the central processing unit to obtain video content.
In this embodiment, after the video image and the audio information are imported into the central processing unit, the central processing unit performs the synthesis calculation to generate the final video content and distributes it to the network.
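The CPU-side synthesis of step S15 amounts to muxing the converted video frames with the audio track. The patent does not name a tool; FFmpeg is one common choice, and the sketch below merely builds such a command line without executing it. The file paths, frame size, and frame rate are illustrative assumptions.

```python
# Hedged sketch: assemble an FFmpeg command that muxes raw YUV420P
# frames (the output of step S13) with an audio track into one file.

def build_mux_command(video_path, audio_path, out_path,
                      size="1920x1080", fps=30):
    return [
        "ffmpeg",
        "-f", "rawvideo",        # raw frames, no container
        "-pix_fmt", "yuv420p",   # matches the conversion in step S13
        "-s", size,              # frame size of the target image
        "-r", str(fps),
        "-i", video_path,        # video image stream
        "-i", audio_path,        # corresponding audio information
        "-c:v", "libx264",
        "-c:a", "aac",
        out_path,
    ]

cmd = build_mux_command("frames.yuv", "audio.aac", "out.mp4")
print(" ".join(cmd))
```

Because the frames arrive already converted by the image processor, the central processing unit's share of the work is limited to this final encode-and-mux step.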
In the method flow described in fig. 1, a real-time interactive image among a plurality of users can be acquired and processed by the image processor; only the processed video image and the corresponding audio information need to be imported into the central processing unit for synthesis to obtain the video content, thereby reducing the load of the central processing unit.
Fig. 2 is a functional block diagram of a preferred embodiment of a video content acquisition apparatus according to the present disclosure.
Referring to fig. 2, the video content acquiring apparatus 20 may run in an electronic device. The video content acquiring apparatus 20 may include a plurality of functional modules composed of program code segments. The program code of each segment in the video content acquiring apparatus may be stored in a memory and executed by at least one processor to perform some or all of the steps of the video content acquisition method described in fig. 1.
In this embodiment, the video content acquiring apparatus 20 may be divided into a plurality of functional modules according to the functions performed by the apparatus. The functional module may include: the system comprises an acquisition module 201, an import module 202, a conversion module 203 and a synthesis module 204. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory.
An obtaining module 201, configured to obtain a real-time interactive image among multiple users in the memory space of the first display card.
The real-time interactive image may refer to an image in which two or more users in different locations interact with each other, such as an interactive image of a teacher giving a remote lecture and a student attending it. Generally, the teacher can see a real-time image of the student on the device screen, and the student can see a real-time image of the teacher, but neither can obtain an image of the teacher and the student interacting together; the real-time interactive image may be such an image of the teacher and the student interacting, synthesized by technical means.
An importing module 202, configured to import the real-time interactive image into the memory space of the second video card.
In the embodiment of the present invention, the real-time interactive image may be imported into the memory space of the second display card (e.g., an ID3D11Texture2D), so that subsequent processing of the real-time interactive image in the memory space of the second display card can be performed in an independent operating environment (independent context), unaffected by the operation of other programs, which reduces the probability of errors. The independent operating environment may include independent programs, independent operating resources, and independent data; that is, data processing in the independent operating environment is not affected by the external environment.
The conversion module 203 is configured to perform video image conversion on the real-time interactive image in the memory space of the second graphics card by using the image processor, so as to obtain a video image.
The image processor (Graphics Processing Unit, GPU) is a microprocessor dedicated to image- and graphics-related operations on personal computers, workstations, game consoles, and some mobile devices (e.g., tablet computers and smart phones).
In the embodiment of the invention, the image processor is used for carrying out video image conversion on the real-time interactive image in the memory space of the second display card, and basically no central processing unit is required to participate, so that the load of the central processing unit is reduced, and the system performance is improved.
The importing module 202 is further configured to import the video image and the audio information corresponding to the video image into a central processing unit.
The Central Processing Unit (CPU) may be a core component responsible for reading, decoding, and executing instructions.
A synthesizing module 204, configured to synthesize the video image and the audio information by using the central processing unit, so as to obtain video content.
In this embodiment, after the video image and the audio information are imported into the central processing unit, the central processing unit performs the synthesis calculation to generate the final video content and distributes it to the network.
As an optional implementation manner, the manner in which the acquiring module 201 acquires the real-time interactive image among the multiple users in the memory space of the first display card is specifically:
creating an image display window;
and reading the data of the memory space of the first display card through the image display window so as to obtain a real-time interactive image among a plurality of users stored in the memory space of the first display card.
In this optional embodiment, a window (the image display window) may be created first to display the real-time interactive image, so that an operator can find problems in time while the image is being processed, perform error analysis, and so on. At the same time, the data of the memory space of the first display card is read through the image display window to obtain the real-time interactive image among the multiple users stored in the memory space of the first display card. The image display window acts as an intermediary: the system only needs to place the real-time interactive image in the image display window, and other programs only need to fetch the real-time interactive image directly from the image display window according to the window identifier (window handle). Neither side needs to concern itself with anything else, interactions such as logic judgments between programs are avoided, and operation efficiency is improved.
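The window-as-intermediary pattern described above can be sketched as a producer/consumer exchange keyed by a window handle. The sketch below is a minimal illustration only: the `windows` registry, `create_window`, `present`, and `read_frame` are hypothetical names introduced here, not the actual windowing API.

```python
# Sketch of the intermediary pattern: the producer places each frame in a
# window identified by a handle, and consumers read directly by handle
# without any other coordination. All names are illustrative.

windows = {}  # window handle -> latest frame


def create_window() -> int:
    """Create an image display window and return its handle."""
    handle = len(windows) + 1
    windows[handle] = None
    return handle


def present(handle: int, frame: bytes) -> None:
    """Producer: place the latest real-time interactive image in the window."""
    windows[handle] = frame


def read_frame(handle: int) -> bytes:
    """Consumer: fetch the current frame directly by window handle."""
    return windows[handle]


hwnd = create_window()
present(hwnd, b"frame-001")
assert read_frame(hwnd) == b"frame-001"
```

Because the exchange happens only through the handle, the producer and each consumer remain decoupled, which is the efficiency argument made in the text.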
The window identifier may be the handle of the window; a handle is an identifier used to identify an object or item, and may be used to refer to a window (form), a file, and the like.
As an optional implementation manner, the manner in which the acquiring module 201 acquires the real-time interactive image among the multiple users in the memory space of the first display card is specifically:
determining the transaction state of a real-time interactive image among a plurality of users in the memory space of the first video card;
judging whether the transaction state is an unlocked state;
and if the transaction state of the real-time interactive image is the unlocked state, acquiring the real-time interactive image.
In this alternative embodiment, the real-time interactive image may be obtained directly from the storage space or program in which it is stored, but it must be ensured that the real-time interactive image is not locked: if it is locked, the real-time interactive image is being occupied or processed and cannot be used by other programs. If the transaction state of the real-time interactive image is the unlocked state, the real-time interactive image is available and can be acquired.
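The lock-state check above can be sketched in a few lines. This is a conceptual illustration only; the `Frame` class, its `locked` flag, and `try_acquire` are hypothetical names, not part of the disclosed implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Frame:
    """A real-time interactive image with a transaction (lock) state."""
    pixels: bytes
    locked: bool = False  # True while another program occupies/processes it


def try_acquire(frame: Frame) -> Optional[Frame]:
    """Return the frame only if its transaction state is 'unlocked'."""
    if frame.locked:
        return None  # occupied or being processed; cannot be used
    return frame


# A locked frame is skipped; an unlocked one is acquired.
busy = Frame(pixels=b"\x00" * 16, locked=True)
free = Frame(pixels=b"\xff" * 16, locked=False)
assert try_acquire(busy) is None
assert try_acquire(free) is free
```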
As an optional implementation manner, the obtaining module 201 is further configured to obtain a first real-time image of a first user and obtain a second real-time image of a second user before obtaining a real-time interactive image among multiple users in the memory space of the first video card;
the video content acquisition apparatus further includes:
the generating module is used for generating the real-time interactive image according to the first real-time image, the second real-time image and a preset virtual scene;
and the storage module is used for storing the real-time interactive image into the memory space of the first display card.
In this alternative embodiment, a first real-time image of the first user may be captured by the camera device of the first user (for example, a real-time image of the first user's profile), and a second real-time image of the second user may be captured by the camera device of the second user (for example, a real-time image of the second user's profile). Using the Unreal Engine 4 engine, the first real-time image and the second real-time image may be fused into a preset virtual scene through Virtual Reality (VR) and Mixed Reality (MR) technology to obtain the real-time interactive image. The real-time interactive image of each frame may be stored and updated in the first graphics card memory space (e.g., a UTextureRenderTarget2D).
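The fusion step is performed by the engine in the actual system; conceptually, it amounts to layering the two users' images onto the scene background. The sketch below is a deliberately minimal stand-in for that compositing (grayscale rows as lists of ints, with 0 treated as transparent); it is not the UE4/VR/MR pipeline itself.

```python
# Minimal conceptual sketch of fusing two user images into a preset scene.
# Images are single grayscale rows (lists of ints); 0 means transparent.

def composite(scene, layer, x_offset):
    """Overlay the non-zero pixels of `layer` onto `scene` at x_offset."""
    out = list(scene)
    for i, px in enumerate(layer):
        if px != 0:                      # skip transparent pixels
            out[x_offset + i] = px
    return out


scene = [10] * 8                         # preset virtual scene (background)
user1 = [200, 200]                       # first user's real-time image
user2 = [0, 150]                         # second user's (first pixel transparent)

frame = composite(composite(scene, user1, 1), user2, 5)
assert frame == [10, 200, 200, 10, 10, 10, 150, 10]
```

Each per-frame result would then be written back to the first graphics card's memory space, as the text describes.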
Virtual reality uses a computer to generate a simulated environment: electronic signals produced from real-world data are combined, through computer technology, with various output devices and converted into phenomena that people can perceive. These phenomena may be real objects that exist in reality, or substances invisible to the naked eye, and they are represented by three-dimensional models.
Mixed reality may refer to overlaying real things onto a virtual scene: the real things (for example, a picture captured by a camera) are virtualized and modeled in 3D, and then fused into a virtual 3D scene.
Optionally, a real-time image of the front of the first user and a real-time image of the front of the second user may be captured by another photographing device, and the real-time image of the front of the second user is output on the device of the first user and the real-time image of the front of the first user is output on the device of the second user.
As an optional implementation manner, the manner in which the conversion module 203 uses the image processor to perform video image conversion on the real-time interactive image in the memory space of the second graphics card to obtain the video image is specifically:
performing preset standard format conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a standard image;
adjusting the standard image into a target image with a preset size;
and converting the target image into a video image in a preset video format.
In this optional implementation, the real-time interactive image may first be converted into the preset standard format (for example, the RGBA format) of the memory space of the second graphics card to obtain a standard image; the standard image is then adjusted to a target image of a preset size; and finally the target image is converted into a video image in a preset video format, for example, the YUV420P format.
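In the patent this conversion runs on the GPU; as a CPU-side sketch, the color-space step for one pixel can be written as below. The BT.601 full-range coefficients are an assumption for illustration — the patent does not specify which conversion matrix is used.

```python
def rgb_to_yuv(r, g, b):
    """Convert one RGB pixel to YUV using BT.601 full-range coefficients
    (assumed here; the patent does not name the matrix)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.169 * r - 0.331 * g + 0.500 * b + 128
    v = 0.500 * r - 0.419 * g - 0.081 * b + 128
    return round(y), round(u), round(v)


# Pure white maps to maximum luma and neutral chroma;
# pure black maps to zero luma and neutral chroma.
assert rgb_to_yuv(255, 255, 255) == (255, 128, 128)
assert rgb_to_yuv(0, 0, 0) == (0, 128, 128)
```

A full RGBA-to-YUV420P conversion would apply this per pixel (ignoring the alpha channel) and then subsample the U and V planes, as described next.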
In the YUV420P format, YUV is a color space: "Y" represents luminance (luma), while "U" and "V" represent chrominance (chroma). On the basis of full sampling (where every pixel carries its own complete YUV information), YUV420P subsamples the chroma in 2 × 2 blocks of pixels: the 4 pixels each keep their own Y component but share the same U and V values, occupying 6 bytes in total instead of 12. One frame of image therefore occupies less space overall.
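The space saving can be verified with a short calculation; the 1920 × 1080 frame size below is illustrative, not taken from the patent.

```python
def frame_bytes(width, height, fmt):
    """Bytes per frame for a few uncompressed pixel formats."""
    sizes = {
        "RGBA": width * height * 4,          # 4 bytes per pixel
        "YUV444": width * height * 3,        # full chroma sampling
        "YUV420P": width * height * 3 // 2,  # 6 bytes per 2x2 block of 4 pixels
    }
    return sizes[fmt]


w, h = 1920, 1080
assert frame_bytes(w, h, "RGBA") == 8_294_400
assert frame_bytes(w, h, "YUV420P") == 3_110_400
# YUV420P is exactly half the size of fully sampled YUV444.
assert frame_bytes(w, h, "YUV420P") * 2 == frame_bytes(w, h, "YUV444")
```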
In the video content acquiring apparatus 20 depicted in fig. 2, the real-time interactive image among a plurality of users can be acquired, the image processor performs image processing on the real-time interactive image, and finally, only the processed video image and the corresponding audio information need to be imported into the central processing unit for synthesis, so as to obtain the video content.
As shown in fig. 3, fig. 3 is a schematic structural diagram of an electronic device implementing a video content obtaining method according to a preferred embodiment of the present invention. The electronic device 3 comprises a memory 31, at least one processor 32, a computer program 33 stored in the memory 31 and executable on the at least one processor 32, and at least one communication bus 34.
Those skilled in the art will appreciate that the schematic diagram shown in fig. 3 is merely an example of the electronic device 3 and does not constitute a limitation of the electronic device 3, which may include more or fewer components than those shown, may combine some components, or may use different components; for example, the electronic device 3 may further include an input/output device, a network access device, and the like.
The electronic device 3 may also include, but is not limited to, any electronic product that can interact with a user through a keyboard, a mouse, a remote controller, a touch panel, or a voice control device, for example, a Personal computer, a tablet computer, a smart phone, a Personal Digital Assistant (PDA), a game console, an Internet Protocol Television (IPTV), a smart wearable device, and the like. The Network where the electronic device 3 is located includes, but is not limited to, the internet, a wide area Network, a metropolitan area Network, a local area Network, a Virtual Private Network (VPN), and the like.
The at least one Processor 32 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, a transistor logic device, a discrete hardware component, etc. The processor 32 may be a microprocessor or the processor 32 may be any conventional processor or the like, and the processor 32 is a control center of the electronic device 3 and connects various parts of the whole electronic device 3 by various interfaces and lines.
The memory 31 may be used to store the computer program 33 and/or the module/unit, and the processor 32 may implement various functions of the electronic device 3 by running or executing the computer program and/or the module/unit stored in the memory 31 and calling data stored in the memory 31. The memory 31 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the electronic device 3, and the like. In addition, the memory 31 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, and the like.
With reference to fig. 1, the memory 31 of the electronic device 3 stores a plurality of instructions to implement a video content acquisition method, and the processor 32 executes the plurality of instructions to implement:
collecting a real-time interactive image;
creating a graphics processor-based stand-alone operating environment;
importing the real-time interactive image into a memory space of a first display card in the independent operation environment;
performing video image conversion on the real-time interactive image based on the independent operation environment to obtain a video image;
and importing the video image and the corresponding audio information into a central processing unit for synthesis to obtain the video content.
Specifically, the processor 32 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1 for a specific implementation method of the instruction, which is not described herein again.
In the electronic device 3 described in fig. 3, the real-time interactive image among the multiple users can be acquired, the image processor performs image processing on the real-time interactive image, and finally, only the processed video image and the corresponding audio information need to be imported into the central processing unit for synthesis, so as to obtain the video content.
The integrated modules/units of the electronic device 3, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the method in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program code may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, or a Read-Only Memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from its spirit or essential attributes. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means through software or hardware. The terms first, second, and the like are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting, and although the present invention is described in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions of the present invention.

Claims (9)

1. A video content obtaining method is applied to electronic equipment, the electronic equipment comprises an image processor and a central processing unit, and the video content obtaining method is characterized in that a memory space of the image processor comprises a first display card memory space and a second display card memory space, and the video content obtaining method comprises the following steps:
acquiring real-time interactive images among a plurality of users in the memory space of the first display card, wherein the method comprises the following steps: creating an image display window; reading data of the memory space of the first display card through the image display window, and obtaining a real-time interactive image among a plurality of users stored in the memory space of the first display card;
importing the real-time interactive image into the memory space of the second display card;
performing video image conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a video image;
importing the video image and the audio information corresponding to the video image into a central processing unit;
and synthesizing the video image and the audio information by using the central processing unit to obtain video content.
2. The method according to claim 1, wherein the obtaining of the real-time interactive image among the plurality of users in the memory space of the first video card comprises:
determining the transaction state of a real-time interactive image among a plurality of users in the memory space of the first video card;
judging whether the transaction state is an unlocked state;
and if the transaction state of the real-time interactive image is the unlocked state, acquiring the real-time interactive image.
3. The method according to claim 1, wherein before the obtaining of the real-time interactive image among the plurality of users in the memory space of the first video card, the method further comprises:
acquiring a first real-time image of a first user and acquiring a second real-time image of a second user;
generating the real-time interactive image according to the first real-time image, the second real-time image and a preset virtual scene;
and storing the real-time interactive image into the memory space of the first display card.
4. The video content obtaining method according to any one of claims 1 to 3, wherein the using the image processor to perform video image conversion on the real-time interactive image in the memory space of the second graphics card to obtain a video image comprises:
performing preset standard format conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a standard image;
adjusting the standard image into a target image with a preset size;
and converting the target image into a video image in a preset video format.
5. A video content acquisition apparatus, characterized in that the video content acquisition apparatus comprises:
the acquisition module is used for acquiring real-time interactive images among a plurality of users in the memory space of the first display card, and comprises: creating an image display window; reading data of the memory space of the first display card through the image display window, and obtaining a real-time interactive image among a plurality of users stored in the memory space of the first display card;
the import module is used for importing the real-time interactive image into a memory space of a second display card;
the conversion module is used for performing video image conversion on the real-time interactive image in the memory space of the second display card by using the image processor to obtain a video image;
the import module is further used for importing the video image and the audio information corresponding to the video image into a central processing unit;
and the synthesis module is used for synthesizing the video image and the audio information by using the central processing unit to obtain video content.
6. The video content acquiring apparatus according to claim 5, wherein the manner of acquiring the real-time interactive image among the plurality of users in the memory space of the first video card by the acquiring module is specifically:
creating an image display window;
and reading the data of the memory space of the first display card through the image display window so as to obtain a real-time interactive image among a plurality of users stored in the memory space of the first display card.
7. The video content acquiring apparatus according to claim 5, wherein the manner of acquiring the real-time interactive image among the plurality of users in the memory space of the first video card by the acquiring module is specifically:
determining the transaction state of a real-time interactive image among a plurality of users in the memory space of the first video card;
judging whether the transaction state is an unlocked state;
and if the transaction state of the real-time interactive image is the unlocked state, acquiring the real-time interactive image.
8. An electronic device, characterized in that the electronic device comprises a processor and a memory, the processor being configured to execute a computer program stored in the memory to implement the video content acquisition method according to any one of claims 1 to 4.
9. A computer-readable storage medium storing at least one instruction which, when executed by a processor, implements the video content acquisition method of any one of claims 1 to 4.
CN202010974663.9A 2020-09-16 2020-09-16 Video content acquisition method and device, electronic equipment and storage medium Active CN112135157B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010974663.9A CN112135157B (en) 2020-09-16 2020-09-16 Video content acquisition method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112135157A CN112135157A (en) 2020-12-25
CN112135157B true CN112135157B (en) 2022-07-05

Family

ID=73845872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010974663.9A Active CN112135157B (en) 2020-09-16 2020-09-16 Video content acquisition method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112135157B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203457245U (en) * 2013-07-31 2014-02-26 重庆顺利科技有限公司 Police inquest system
US20150002682A1 (en) * 2010-02-26 2015-01-01 Bao Tran High definition camera
CN111417014A (en) * 2020-03-20 2020-07-14 威比网络科技(上海)有限公司 Video generation method, system, device and storage medium based on online education
CN111598768A (en) * 2020-07-23 2020-08-28 平安国际智慧城市科技股份有限公司 Image optimization processing method and device, computer equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant