CN112511811B - Multi-camera processing method and device and electronic equipment - Google Patents

Multi-camera processing method and device and electronic equipment

Info

Publication number
CN112511811B
CN112511811B (application CN202110146783.4A)
Authority
CN
China
Prior art keywords
camera
user
resolution
restarted
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110146783.4A
Other languages
Chinese (zh)
Other versions
CN112511811A (en)
Inventor
范旭宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Tuoke Network Technology Co ltd
Original Assignee
Beijing Tuoke Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Tuoke Network Technology Co ltd filed Critical Beijing Tuoke Network Technology Co ltd
Priority to CN202110146783.4A priority Critical patent/CN112511811B/en
Publication of CN112511811A publication Critical patent/CN112511811A/en
Application granted granted Critical
Publication of CN112511811B publication Critical patent/CN112511811B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/1066Session management
    • H04L65/1101Session protocols
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/65Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/60Network streaming of media packets
    • H04L65/70Media network packetisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25808Management of client data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/258Client or end-user data management, e.g. managing client capabilities, user preferences or demographics, processing of multiple end-users preferences to derive collaborative data
    • H04N21/25866Management of end-user data
    • H04N21/25875Management of end-user data involving end-user authentication
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/478Supplemental services, e.g. displaying phone caller identification, shopping application

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Graphics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Computer Security & Cryptography (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides a multi-camera processing method, a multi-camera processing device, and an electronic device. In online-classroom scenes that require multi-scene pictures from multiple directions, such as hand movements in a piano classroom, easel views in a drawing classroom, or the blackboard in an offline-to-online blackboard classroom, multi-camera streaming-media technology is used to display the teacher's teaching content from multiple viewpoints, improving the user experience.

Description

Multi-camera processing method and device and electronic equipment
Technical Field
The invention relates to the technical field of online education, in particular to a multi-camera processing method and device and electronic equipment.
Background
At present, many online-classroom scenes, for example piano classrooms, drawing classrooms, and offline-to-online blackboard classrooms, require multi-scene pictures from multiple directions, such as hand movements and blackboard views, so that a teacher's teaching content can be displayed from multiple viewpoints.
Disclosure of Invention
In order to solve the above problem, an object of the embodiments of the present invention is to provide a multi-camera processing method and apparatus, and an electronic device.
In a first aspect, an embodiment of the present invention provides a multi-camera processing method, including:
when a user enters a room of an online classroom, user equipment acquires a user identifier of the user and room configuration information of the room entered by the user; the room configuration information carries a multi-camera starting parameter;
when the multi-camera starting parameter indicates that the user equipment needs to start the multi-camera, acquiring a camera list stored by the user equipment; the camera list carries camera identification of the camera installed by the user equipment;
identifying a plurality of cameras which are installed on the user equipment and can acquire color pictures based on camera type characters in the camera identification;
starting a plurality of cameras capable of collecting color pictures, and acquiring equipment identification of each camera in the plurality of cameras and video data collected by each camera;
obtaining an extended identifier of each camera by using the user identifier and the equipment identifier of each camera;
sending the obtained extended identification of each camera to a signaling server;
when feedback information sent by the signaling server upon receiving the extension identifier of each camera is acquired, acquiring an encoding mode and UDP protocol description information supported by the user equipment, and generating Session Description Protocol (SDP) information by using the acquired encoding mode and UDP protocol description information;
sending the generated SDP protocol information to the signaling server for media protocol negotiation, and receiving SDP information fed back by the signaling server; wherein the SDP information includes: the coding mode adopted when the camera transmits the video and the address information of the media server;
coding the video data collected by each camera by using the coding mode recorded in the SDP information to obtain video stream data to be issued of each camera;
establishing a media link between each camera and the media server according to the address information of the media server;
and respectively issuing the video stream data of each camera by using the established media link of each camera.
In a second aspect, an embodiment of the present invention further provides a multi-camera processing apparatus, including:
the first acquisition module is used for acquiring, when a user enters a room of an online classroom, a user identifier of the user and room configuration information of the room the user enters; the room configuration information carries a multi-camera starting parameter;
the second obtaining module is used for obtaining a camera list stored by the user equipment when the multi-camera starting parameter indicates that the user equipment needs to start the multi-camera; the camera list carries camera identification of the camera installed by the user equipment;
the identification module is used for identifying a plurality of cameras which are arranged on the user equipment and can acquire color pictures based on the camera type characters in the camera identification;
the starting module is used for starting a plurality of cameras capable of collecting color pictures and acquiring equipment identifications of the cameras and video data collected by the cameras;
the processing module is used for obtaining the extended identification of each camera by using the user identification and the equipment identification of each camera;
the sending module is used for sending the obtained extended identification of each camera to the signaling server;
a third obtaining module, configured to, when obtaining feedback information sent when the signaling server receives the extension identifier of each camera, obtain a coding mode and UDP protocol description information supported by the user equipment, and generate SDP protocol information by using the obtained coding mode and UDP protocol description information;
a negotiation module, configured to send the generated SDP protocol information to the signaling server for media protocol negotiation, and receive SDP information fed back by the signaling server; wherein the SDP information includes: the coding mode adopted when the camera transmits the video and the address information of the media server;
the encoding module is used for encoding the video data acquired by each camera by using the encoding mode recorded in the SDP information to obtain video stream data to be issued of each camera;
the link module is used for establishing a media link between each camera and the media server according to the address information of the media server;
and the release module is used for respectively releasing the video stream data of each camera by utilizing the established media link of each camera.
In a third aspect, the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method in the first aspect.
In a fourth aspect, embodiments of the present invention also provide an electronic device, which includes a memory, a processor, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor to perform the steps of the method according to the first aspect.
In the solutions provided in the foregoing first to fourth aspects of the embodiments of the present invention, after a user enters a room of an online classroom, the user equipment acquires the room configuration information of the room the user enters. When the multi-camera starting parameter carried in the room configuration information indicates that the user equipment needs to start multiple cameras, the user equipment acquires the camera list it stores, determines the plurality of cameras capable of collecting color pictures, starts those cameras, and acquires the device identifier of each camera and the video data collected by each camera. It then obtains an extension identifier for each camera from the user identifier and the device identifier of that camera, sends the obtained extension identifiers to a signaling server, performs media protocol negotiation with the signaling server, and receives the SDP information fed back by the signaling server, which includes the encoding mode the cameras use when transmitting video and the address information of the media server. The user equipment encodes the video data collected by each camera with the encoding mode recorded in the SDP information to obtain the video stream data to be published for each camera, establishes a media link between each camera and the media server according to the address information of the media server, and publishes the video stream data of each camera over its established media link. Thus, in scenes such as piano classrooms, drawing classrooms, and offline-to-online blackboard classrooms, which need multi-scene pictures from multiple directions such as hand movements and blackboard views, multi-camera streaming-media technology displays the teacher's teaching content from multiple viewpoints and improves the user experience.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flowchart illustrating a multi-camera processing method according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a multi-camera processing apparatus according to embodiment 2 of the present invention;
fig. 3 shows a schematic structural diagram of an electronic device provided in embodiment 3 of the present invention.
Detailed Description
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", "counterclockwise", and the like, indicate orientations and positional relationships based on those shown in the drawings, and are used only for convenience of description and simplicity of description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be considered as limiting the present invention.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
In the present invention, unless otherwise expressly specified or limited, the terms "mounted," "connected," "secured," and the like are to be construed broadly and can, for example, denote a fixed connection, a detachable connection, or an integral connection; a mechanical or an electrical connection; a direct connection or an indirect connection through intervening media; or internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
At present, many online-classroom scenes, for example piano classrooms, drawing classrooms, and offline-to-online blackboard classrooms, require multi-scene pictures from multiple directions, such as hand movements and blackboard views, so that a teacher's teaching content can be displayed from multiple viewpoints.
Based on this, the present embodiment provides a multi-camera processing method, an apparatus, and an electronic device. After a user enters a room of an online classroom, the user equipment acquires the room configuration information of the room the user enters. When the multi-camera starting parameter carried in the room configuration information indicates that the user equipment needs to start multiple cameras, the user equipment acquires the camera list it stores, determines the plurality of cameras capable of collecting color pictures, starts those cameras, and acquires the device identifier of each camera and the video data collected by each camera. It then obtains an extension identifier for each camera from the user identifier and the device identifier of that camera, sends the obtained extension identifiers to a signaling server, performs media protocol negotiation with the signaling server, and receives the SDP information fed back by the signaling server, which includes the encoding mode the cameras use when transmitting video and the address information of the media server. The user equipment encodes the video data collected by each camera with the encoding mode recorded in the SDP information to obtain the video stream data to be published for each camera, establishes a media link between each camera and the media server according to the address information of the media server, and publishes the video stream data of each camera over its established media link. Thus, in scenes such as piano classrooms, drawing classrooms, and offline-to-online blackboard classrooms, which need multi-scene pictures from multiple directions such as hand movements and blackboard views, multi-camera streaming-media technology displays the teacher's teaching content from multiple viewpoints and improves the user experience.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, the present application is described in further detail with reference to the accompanying drawings and the detailed description.
Example 1
The present embodiment provides a multi-camera processing method, and its execution subject is a user equipment.
The user equipment includes but is not limited to: mobile terminals and portable computers.
The user equipment can log in to the website of the online education platform through the platform's installed APP or through a browser provided on the user equipment, and interact with the running online education platform.
The user equipment can also interact with a signaling server.
Moreover, the user equipment, together with the user equipment used by other users of the online education platform, may constitute a blockchain system, in which each user equipment is registered as an ordinary node.
In order to verify the user identity of the user, the online education platform registers to become a verification node in the blockchain system.
The user may be, but is not limited to: a student, a teacher, or a course manager of a school.
Referring to a flow chart of a multi-camera processing method shown in fig. 1, the present embodiment provides a multi-camera processing method, which includes the following specific steps:
step 100, when a user enters a room of an online classroom, user equipment acquires a user identifier of the user and room configuration information of the room entered by the user; and the room configuration information carries a multi-camera starting parameter.
In step 100, the user device is a computing device used by the user to interact with the online education platform.
When a user uses the user equipment to study online courses on the online education platform, whether through the platform's APP or by logging in to the platform's website through a browser, the user needs to input a user name and a password to log in to the online education platform. Then, in order to verify the identity of the user, the online education platform may perform the following steps (1) to (3):
(1) acquiring a user type input by the user, splicing the user type and a user name input by the user to obtain an authentication character string, and performing hash calculation on the authentication character string to obtain an authentication hash value;
(2) inquiring an identity key corresponding to the user name from a block chain system by using the user name input by the user;
(3) and when the identity authentication hash value is the same as the inquired identity key, allowing the user to enter a room of the online classroom.
In the step (1), the online education platform prompts the user to input the user type after the user logs in, so that the user type input by the user is obtained.
The user types include, but are not limited to: student type, teacher type, and administrator type.
The student type has student authority, the lowest level of authority on the online education platform; a user with student authority can only perform learning operations on the platform.
The teacher type has teacher authority, an intermediate level of authority on the online education platform; a user with teacher authority can arrange his or her own teaching tasks and carry them out when class time arrives.
The administrator type has administrator authority, the highest level of authority on the online education platform; teaching tasks arranged by users with teacher authority can be checked and edited, so that teaching tasks arranged by different teachers do not conflict in time.
The user type can be represented by a character, and the user name is a character string. Performing hash calculation on the authentication character string obtained by splicing the user type and the user name is therefore simply performing hash calculation on a character string; the specific process of obtaining the authentication hash value is a standard hash calculation and is not described here again.
In step (2), the blockchain system stores the correspondence between user names and identity keys in advance.
The user type of a user is assigned by the online education platform after the user first logs in to and registers with the platform, once the user's identity information has passed the platform's identity authentication; the user identifier is allocated to the user at the same time. After obtaining the user type, the online education platform splices the user type and the user name, performs hash calculation on the resulting character string to obtain the user's identity key, generates the correspondence between the user name and the identity key, and sends this correspondence to the blockchain system for storage.
In addition to the correspondence between the user name and the identity key, the online education platform also generates the correspondence between the user name and the user identifier, and stores it in the blockchain system.
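Steps (1) to (3) can be sketched as follows. This is a minimal illustration, not the patented implementation: the patent does not name the hash algorithm, so SHA-256 is assumed here, and a plain dictionary stands in for the blockchain query of step (2); all function names are hypothetical.

```python
import hashlib

def authentication_hash(user_type: str, user_name: str) -> str:
    """Step (1): splice the user type and user name, then hash the result."""
    auth_string = user_type + user_name
    # SHA-256 is an assumption; the patent only says "hash calculation".
    return hashlib.sha256(auth_string.encode("utf-8")).hexdigest()

def verify_user(user_type: str, user_name: str, identity_keys: dict) -> bool:
    """Steps (2)-(3): query the identity key stored for this user name and
    compare it with the freshly computed authentication hash value."""
    stored_key = identity_keys.get(user_name)  # stand-in for the blockchain query
    return stored_key is not None and stored_key == authentication_hash(user_type, user_name)
```

Because the identity key stored at registration is itself the hash of the spliced user type and user name, verification succeeds only when both the user name and the user type match what was registered.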
When the user enters a room of the online classroom, the online education platform sends to the user equipment the room configuration information of the room and the user identifier assigned to the user at registration, so that the user equipment obtains the user identifier of the user and the room configuration information of the room the user enters.
Specifically, the online education platform acquires the corresponding relation between the user name and the user identifier stored in the blockchain system from the blockchain system, then uses the user name input by the user to inquire the user identifier corresponding to the user name from the corresponding relation between the user name and the user identifier, and feeds the inquired user identifier back to the user equipment used by the user; and the user equipment acquires the user identification of the user.
The room configuration information of the room includes, but is not limited to: the resolution and frame rate that the cameras can use, and the multi-camera starting parameter.
In one embodiment, the multi-camera turn-on parameter may be 0 or 1.
When the multi-camera starting parameter is 0, indicating that the user equipment does not need to start the multi-camera; and when the multi-camera starting parameter is 1, indicating that the user equipment needs to start the multi-camera.
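The room configuration information described above can be modeled as follows. This is only an illustrative sketch: the field names and the string form of the resolution are assumptions, while the 0/1 semantics of the multi-camera starting parameter come from the description above.

```python
from dataclasses import dataclass

@dataclass
class RoomConfig:
    resolution: str          # resolution the cameras can use, e.g. "1280x720" (format assumed)
    frame_rate: int          # frame rate the cameras can use
    multi_camera_param: int  # 0: multi-camera not needed, 1: multi-camera needed

    @property
    def needs_multi_camera(self) -> bool:
        # Per the description: 1 means the user equipment must start multiple cameras.
        return self.multi_camera_param == 1
```

On receiving the room configuration information, the user equipment would check `needs_multi_camera` before proceeding to step 102.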
102, when the multi-camera starting parameter indicates that the user equipment needs to start the multi-camera, acquiring a camera list stored by the user equipment; and the camera list carries the camera identification of the camera installed by the user equipment.
In step 102, when the multi-camera start parameter is 1, it is determined that the multi-camera start parameter indicates that the user equipment needs to start multiple cameras.
The camera identifier is a character string. The characters in the character string may include, but are not limited to, a camera type character indicating the type of the camera. The camera type character may be the X-th character, counted in the display direction, of the camera identifier.
And 104, identifying a plurality of cameras which are installed on the user equipment and can acquire color pictures based on the camera type characters in the camera identification.
In step 104, since the camera type character is the X-th character in the display direction of the camera identifier, the user equipment may read the X-th character in the display direction of each camera identifier and compare the read character with the camera type, recorded in a camera type table, of cameras capable of collecting color pictures, to obtain a comparison result. When the comparison result indicates that the read character is the same as the camera type of cameras capable of collecting color pictures recorded in the camera type table, the camera corresponding to that camera identifier is determined to be a camera capable of collecting color pictures.
The camera type table is cached in the user equipment in advance. In addition to the camera type of cameras capable of collecting color pictures, the camera type table records the camera types of communication cameras, remote sensing cameras, and ultraviolet cameras.
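The filtering in step 104 can be sketched as below. The position X and the single-letter type codes are assumptions (the patent leaves both unspecified); only the mechanism of reading the X-th character and comparing it against the cached camera type table is taken from the description.

```python
# Hypothetical values: the patent does not fix X or the type codes.
CAMERA_TYPE_INDEX = 2  # the X-th character (0-based) in the display direction
CAMERA_TYPE_TABLE = {
    "color": "C",           # camera capable of collecting color pictures
    "communication": "M",
    "remote_sensing": "R",
    "ultraviolet": "U",
}

def color_cameras(camera_list: list) -> list:
    """Step 104: keep the camera identifiers whose camera type character
    matches the color-camera type recorded in the camera type table."""
    color_code = CAMERA_TYPE_TABLE["color"]
    return [cid for cid in camera_list
            if len(cid) > CAMERA_TYPE_INDEX and cid[CAMERA_TYPE_INDEX] == color_code]
```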
And 106, starting a plurality of cameras capable of acquiring color pictures, and acquiring the equipment identification of each camera in the plurality of cameras and the video data acquired by each camera.
In step 106, after a camera is turned on, the online education platform assigns a camera identifier to the turned-on camera. The started camera then acquires video data using the default resolution and frame rate recorded in its attribute information, and the acquired video data are fed back to the user equipment for caching.
In order to facilitate the query of the attribute information of the camera, a camera identifier may be set in the attribute information of the camera.
Besides the default resolution and frame rate used by the camera to acquire the video data and the camera identification, the attribute information of the camera also records the pixel value of the camera.
The plurality of cameras are cameras capable of collecting color pictures.
Each camera referred to here is one of the plurality of cameras capable of collecting color pictures.
And step 108, obtaining the extended identification of each camera by using the user identification and the equipment identification of each camera.
In step 108, in order to obtain the extension identifier of each camera, the MD5 value of the character string after the user identifier and the device identifier of each camera are spliced may be calculated as the extension identifier of each camera.
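Step 108 can be sketched as follows. The concatenation order (user identifier first, then device identifier) is an assumption; the embodiment only says the two are spliced before the MD5 value is computed:

```python
import hashlib

def extension_identifier(user_id: str, device_id: str) -> str:
    """Compute the MD5 value of the string formed by splicing the user
    identifier and the camera's device identifier, as in step 108.
    The splicing order is assumed to be user_id + device_id."""
    return hashlib.md5((user_id + device_id).encode("utf-8")).hexdigest()
```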
And step 110, sending the obtained extension identification of each camera to a signaling server.
In step 110, after receiving the extension identifier of each camera sent by the user equipment, the signaling server sends feedback information to the user equipment.
And step 112, when feedback information sent when the signaling server receives the extension identifier of each camera is obtained, obtaining the coding mode and the UDP protocol description information supported by the user equipment, and generating SDP protocol information by using the obtained coding mode and the UDP protocol description information.
In step 112, the coding mode and the UDP protocol description information supported by the user equipment are recorded in the system information of the user equipment.
The process of generating the SDP protocol information by using the obtained encoding method and the UDP protocol description information is the prior art, and is not described herein again.
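Although the generation of SDP protocol information is prior art, a bare-bones illustration may help. This sketch assembles a minimal SDP description from a supported codec and UDP transport parameters; the field values and the `build_sdp` helper are illustrative assumptions, and real implementations (e.g. WebRTC stacks) emit far richer SDP:

```python
def build_sdp(codec: str, payload_type: int, udp_port: int, ip: str) -> str:
    """Assemble a minimal SDP text from a supported coding mode and UDP
    description information. Illustrative only; not the patented method."""
    lines = [
        "v=0",
        f"o=- 0 0 IN IP4 {ip}",
        "s=-",
        f"c=IN IP4 {ip}",
        "t=0 0",
        f"m=video {udp_port} UDP/TLS/RTP/SAVPF {payload_type}",
        f"a=rtpmap:{payload_type} {codec}/90000",
    ]
    return "\r\n".join(lines) + "\r\n"
```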
Step 114, sending the generated SDP protocol information to the signaling server for media protocol negotiation, and receiving SDP information fed back by the signaling server; wherein the SDP information includes: and the coding mode adopted when the camera transmits the video and the address information of the media server.
In step 114, the specific process of performing media protocol negotiation is the prior art and is not described herein again.
And step 116, encoding the video data acquired by each camera by using the encoding mode recorded in the SDP information to obtain video stream data to be issued of each camera.
In step 116, after obtaining the video stream data to be distributed of each camera, the user equipment sets the extension identifier of each camera to the video stream data to be distributed of each camera.
And step 118, establishing a media link between each camera and the media server according to the address information of the media server.
In step 118, a process of establishing a media link between each camera and the media server according to the address information of the media server is the prior art, and is not described herein again.
And step 120, respectively issuing the video stream data of each camera by using the established media link of each camera.
When the user equipment distributes video data through multiple cameras, the data collected by the cameras are encoded and distributed simultaneously, which occupies a large share of the system resources and network resources of the user equipment. When those occupancy rates are high, the user equipment is prone to crashing, and the teaching activity of distributing video data through multiple cameras cannot proceed normally. To keep the system resources and network resources of the user equipment from being excessively occupied when distributing video data with multiple cameras, the multi-camera processing method provided by this embodiment may perform the following steps (1) to (7):
(1) acquiring system load information and network load information;
(2) when it is determined that the system load information is greater than a system load threshold and the network load information is greater than a network load threshold, determining that the user equipment is in an overload state;
(3) when the user equipment is determined to be in an overload state, acquiring the resolution currently used by each camera and a resolution list; wherein, the resolution list records the corresponding relationship between resolution and frame rate;
(4) determining a camera which uses the maximum resolution at present according to the resolution currently used by each camera, determining the camera which uses the maximum resolution at present as a camera to be restarted, and caching a video stream data frame which is currently issued by the camera to be restarted;
(5) closing the camera to be restarted, and issuing the cached video stream data frames on the media link of the camera to be restarted;
(6) selecting the maximum resolution from resolutions lower than the resolution used by the camera to be restarted, which are recorded in the resolution list, as the resolution used by the camera to be restarted after restarting, and determining the frame rate corresponding to the selected resolution as the frame rate for acquiring video stream data after restarting the camera to be restarted;
(7) restarting the camera to be restarted with the selected resolution and frame rate, so that the restarted camera continues to collect video stream data at the selected resolution and frame rate; when the video stream data collected by the restarted camera are obtained, stopping issuing the cached video stream data frames on the media link of the restarted camera and issuing the video stream data collected by the restarted camera instead.
In the step (1), the user equipment acquires the system load information and the network load information from a manager stored in the user equipment.
Wherein the system load information includes: CPU utilization rate and memory utilization rate; the network load information includes: packet loss rate and network delay time.
In the step (2), the system load threshold may be set to any value between 85% and 95%.
And when the CPU utilization rate is greater than the system load threshold value or the memory utilization rate is greater than the system load threshold value, determining that the system load information is greater than the system load threshold value.
For the packet loss rate, the network load threshold is 20%; for the network delay time, the network load threshold is 500 milliseconds. That is, when the packet loss rate is greater than 20% or the network delay time is greater than 500 milliseconds, the network load information is determined to be greater than the network load threshold.
In the step (3), the user equipment acquires the attribute information of each camera by using the camera identifier of each camera, and uses the default resolution described in the attribute information of each camera as the resolution currently used by each camera.
The resolution list is stored in the user equipment.
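Steps (1), (2), (4), and (6) can be sketched as follows. The 90% system threshold is one value from the 85%–95% range given above, and representing each resolution by a single integer keyed to its frame rate is an assumption about how the resolution list is organized:

```python
def is_overloaded(cpu, mem, loss, delay_ms,
                  sys_threshold=0.90, loss_threshold=0.20, delay_threshold=500):
    """Steps (1)-(2): the system load exceeds its threshold when either
    CPU or memory usage does; the network load exceeds its threshold when
    packet loss > 20% or delay > 500 ms. Overload requires both."""
    system_high = cpu > sys_threshold or mem > sys_threshold
    network_high = loss > loss_threshold or delay_ms > delay_threshold
    return system_high and network_high

def next_lower_mode(current_resolution, resolution_list):
    """Steps (4) and (6): from the resolution list (resolution -> frame
    rate), pick the largest resolution strictly below the current one,
    together with its corresponding frame rate."""
    lower = [r for r in resolution_list if r < current_resolution]
    if not lower:
        return None  # no lower resolution available for a downgrade
    chosen = max(lower)
    return chosen, resolution_list[chosen]

# Illustrative resolution list; the real list is stored in the user equipment.
modes = {1080: 30, 720: 30, 480: 15}
```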
As can be seen from steps (1) to (7) above, the states of the system and the network are monitored in real time through the system load information and the network load information. When the system load information is greater than the system load threshold and the network load information is greater than the network load threshold, the resolution and frame rate used by the camera currently acquiring video data at the maximum resolution are reduced by restarting that camera. This relieves the pressure on the system resources and network resources of the user equipment while multiple cameras acquire and distribute video data, and avoids the defect that the teaching task cannot continue because the user equipment crashes. Since restarting the camera takes only a short time, the user merely perceives a brief stutter in the picture and the user experience is not affected; the user equipment therefore remains as stable in a multi-camera scene as when a single camera is used.
In addition to the above operations on multiple cameras, the user equipment may also be made to subscribe to a video stream, and watch multiple videos simultaneously:
(1) when the user equipment issues video data through multiple cameras, the signaling server pushes, to the room, notification information that video data has been added; the notification information includes attribute characteristics of the video data, such as the user identifier, the video data identifier, and the equipment identifier of the camera; the video data identifier is allocated to each camera by the signaling server after it obtains the extension identifier of each camera;
(2) after other users in the room receive the notification information, a far-end stream object is created from the user identifier, the video data identifier, and the camera equipment identifier carried in the notification information, and the stream object is assigned an extended identification code (the allocation rule is to compute the md5 value of the character string obtained by splicing the user identifier and the camera's equipment identifier carried in the notification information, and to use that value as the extended identification code);
(3) as with the video data publishing process, each piece of video data needs to be subscribed to separately through the signaling server; stream information is sent to the online education platform and the subscription is started;
(4) after the subscription succeeds, a media link object RTCConnection (generally referred to as a downlink of video data) is created for each stream to receive the video data;
(5) subscribing likewise requires sending the SDP protocol information to the signaling server for negotiation and then establishing a UDP media link with the media server. Watching one camera requires one subscription, so a user who subscribes to the video data of multiple cameras must also create multiple media links;
(6) once the media links are established, the media server pushes the real-time video data of the opposite terminal; each RTCConnection receives the video data issued by the corresponding far-end camera, and the pictures of the multiple cameras can then be seen in the room.
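The subscription steps above can be sketched as follows. `RemoteStream` and the placeholder `RTCConnection` class are illustrative stand-ins (the real RTCConnection wraps an actual media link), and the md5 splicing order is the same assumption as on the publishing side:

```python
import hashlib

class RemoteStream:
    """Far-end stream object from step (2): the extended identification
    code is the md5 of the spliced user id and camera device id."""
    def __init__(self, user_id: str, video_id: str, device_id: str):
        self.video_id = video_id
        self.ext_id = hashlib.md5((user_id + device_id).encode()).hexdigest()

class RTCConnection:
    """Placeholder for the per-stream media link object of step (4)."""
    def __init__(self, stream: RemoteStream):
        self.stream = stream

def subscribe_all(notifications):
    """One subscription, hence one media link, per remote camera (step 5)."""
    links = []
    for n in notifications:
        stream = RemoteStream(n["user_id"], n["video_id"], n["device_id"])
        links.append(RTCConnection(stream))
    return links
```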
In summary, in the multi-camera processing method provided by this embodiment, after a user enters a room of the online classroom, the user equipment may acquire the room configuration information of the room the user entered; when the multi-camera starting parameter carried in the room configuration information indicates that the user equipment needs to start multiple cameras, it acquires the camera list stored by the user equipment and determines the cameras of the user equipment capable of collecting color pictures; it then starts those cameras, acquires the equipment identifier of each camera and the video data collected by each camera, obtains the extension identifier of each camera from the user identifier and the equipment identifier of each camera, sends the obtained extension identifiers to the signaling server, carries out media protocol negotiation with the signaling server, and receives the SDP information fed back by the signaling server, where the SDP information includes the coding mode adopted when the cameras transmit video and the address information of the media server; it encodes the video data collected by each camera with the coding mode recorded in the SDP information to obtain the video stream data to be issued for each camera, establishes a media link between each camera and the media server according to the address information of the media server, and issues the video stream data of each camera over the established media links. Therefore, in scenes that need multi-directional, multi-scene pictures, such as hand actions and blackboard pictures in piano classrooms, painting classrooms and offline-online blackboard classrooms, the multi-camera streaming media technology displays the teacher's teaching content from multiple scenes and improves user experience.
Example 2
A multi-camera processing apparatus proposed in this embodiment is configured to execute the multi-camera processing method described in embodiment 1 above.
Referring to a schematic structural diagram of a multi-camera processing apparatus shown in fig. 2, the present embodiment provides a multi-camera processing apparatus, including:
a first obtaining module 200, configured to obtain, when a user enters a room of an online classroom, a user identifier of the user and room configuration information of the room that the user enters; the room configuration information carries a multi-camera starting parameter;
a second obtaining module 202, configured to obtain a camera list stored in the user equipment when the multi-camera start parameter indicates that the user equipment needs to start multiple cameras; the camera list carries camera identification of the camera installed by the user equipment;
the identification module 204 is configured to identify, based on the camera type characters in the camera identifier, a plurality of cameras which are installed in the user equipment and can acquire color pictures;
the starting module 206 is configured to start a plurality of cameras capable of acquiring color pictures, and acquire an equipment identifier of each camera in the plurality of cameras and video data acquired by each camera;
the processing module 208 is configured to obtain an extended identifier of each camera by using the user identifier and the device identifier of each camera;
a sending module 210, configured to send the obtained extension identifier of each camera to a signaling server;
a third obtaining module 212, configured to, when obtaining feedback information sent when the signaling server receives the extension identifier of each camera, obtain a coding mode and UDP protocol description information supported by the user equipment, and generate SDP protocol information by using the obtained coding mode and UDP protocol description information;
a negotiation module 214, configured to send the generated SDP protocol information to the signaling server for performing media protocol negotiation, and receive SDP information fed back by the signaling server; wherein the SDP information includes: the coding mode adopted when the camera transmits the video and the address information of the media server;
the encoding module 216 is configured to encode the video data acquired by each camera by using the encoding mode recorded in the SDP information to obtain video stream data to be issued for each camera;
a link module 218, configured to establish a media link between each camera and the media server according to the address information of the media server;
the issuing module 220 is configured to issue the video stream data of each camera by using the established media link of each camera.
Optionally, the multi-camera processing apparatus provided in this embodiment further includes:
the first acquisition unit is used for acquiring the resolution currently used by each camera and a resolution list when the user equipment is determined to be in an overload state; wherein, the resolution list records the corresponding relationship between resolution and frame rate;
the first processing unit is used for determining a camera which uses the maximum resolution at present according to the resolution currently used by each camera, determining the camera which uses the maximum resolution at present as a camera to be restarted, and caching a video stream data frame currently issued by the camera to be restarted;
the second processing unit is used for closing the camera to be restarted and issuing the cached video stream data frames on the media link of the camera to be restarted;
the third processing unit is used for selecting the maximum resolution as the resolution used by the camera to be restarted from the resolutions which are recorded in the resolution list and are lower than the resolution used by the camera to be restarted, and determining the frame rate corresponding to the selected resolution as the frame rate for acquiring video stream data after the camera to be restarted is restarted;
and the restarting unit is used for restarting the camera to be restarted by utilizing the selected resolution ratio and the selected frame rate, so that the restarted camera continuously collects video stream data by utilizing the selected resolution ratio and the selected frame rate, stopping issuing the cached video stream data frame on a media link of the restarted camera when the video stream data collected by the restarted camera is obtained, and simultaneously issuing the video stream data collected by the restarted camera.
Optionally, the multi-camera processing apparatus provided in this embodiment further includes:
the second acquisition unit is used for acquiring system load information and network load information;
a fourth processing unit, configured to determine that the user equipment is in an overload state when it is determined that the system load information is greater than a system load threshold and the network load information is greater than a network load threshold.
Optionally, the multi-camera processing apparatus provided in this embodiment further includes:
a third obtaining unit, configured to obtain a user type input by the user, splice the user type and a user name input by the user to obtain an authentication string, and perform hash calculation on the authentication string to obtain an authentication hash value;
the query unit is used for querying an identity key corresponding to the user name from the block chain system by using the user name input by the user;
and the fifth processing unit is used for allowing the user to enter a room of the online classroom when the identity authentication hash value is the same as the inquired identity key.
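The identity-verification units above can be sketched as follows. SHA-256 is an assumption — the embodiment only says "hash calculation" without naming the function — and the splicing order (user type first, then user name) is likewise assumed:

```python
import hashlib

def identity_hash(user_type: str, user_name: str) -> str:
    """Splice the user type and user name into an authentication string
    and hash it. SHA-256 and the splicing order are assumptions."""
    return hashlib.sha256((user_type + user_name).encode("utf-8")).hexdigest()

def may_enter_room(user_type: str, user_name: str, identity_key: str) -> bool:
    """Allow entry to the online classroom when the computed hash equals
    the identity key queried (by user name) from the blockchain system."""
    return identity_hash(user_type, user_name) == identity_key
```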
In summary, in the multi-camera processing apparatus provided by this embodiment, after a user enters a room of the online classroom, the user equipment may acquire the room configuration information of the room the user entered; when the multi-camera starting parameter carried in the room configuration information indicates that the user equipment needs to start multiple cameras, it acquires the camera list stored by the user equipment and determines the cameras of the user equipment capable of collecting color pictures; it then starts those cameras, acquires the equipment identifier of each camera and the video data collected by each camera, obtains the extension identifier of each camera from the user identifier and the equipment identifier of each camera, sends the obtained extension identifiers to the signaling server, carries out media protocol negotiation with the signaling server, and receives the SDP information fed back by the signaling server, where the SDP information includes the coding mode adopted when the cameras transmit video and the address information of the media server; it encodes the video data collected by each camera with the coding mode recorded in the SDP information to obtain the video stream data to be issued for each camera, establishes a media link between each camera and the media server according to the address information of the media server, and issues the video stream data of each camera over the established media links. Therefore, in scenes that need multi-directional, multi-scene pictures, such as hand actions and blackboard pictures in piano classrooms, painting classrooms and offline-online blackboard classrooms, the multi-camera streaming media technology displays the teacher's teaching content from multiple scenes and improves user experience.
Example 3
The present embodiment proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the multi-camera processing method described in embodiment 1 above. For specific implementation, refer to method embodiment 1, which is not described herein again.
In addition, referring to the schematic structural diagram of an electronic device shown in fig. 3, this embodiment further provides an electronic device, which includes a bus 51, a processor 52, a transceiver 53, a bus interface 54, a memory 55, and a user interface 56.
In this embodiment, the electronic device further includes: one or more programs stored on the memory 55 and executable on the processor 52, configured to be executed by the processor for performing the following steps (1) to (11):
(1) when a user enters a room of an online classroom, acquiring a user identifier of the user and room configuration information of the room entered by the user; the room configuration information carries a multi-camera starting parameter;
(2) when the multi-camera starting parameter indicates that the user equipment needs to start the multi-camera, acquiring a camera list stored by the user equipment; the camera list carries camera identification of the camera installed by the user equipment;
(3) identifying a plurality of cameras which are installed on the user equipment and can acquire color pictures based on camera type characters in the camera identification;
(4) starting a plurality of cameras capable of collecting color pictures, and acquiring equipment identification of each camera in the plurality of cameras and video data collected by each camera;
(5) obtaining an extended identifier of each camera by using the user identifier and the equipment identifier of each camera;
(6) sending the obtained extended identification of each camera to a signaling server;
(7) when feedback information sent when the signaling server receives the extension identification of each camera is acquired, acquiring a coding mode and UDP protocol description information supported by the user equipment, and generating SDP protocol information by using the acquired coding mode and UDP protocol description information;
(8) sending the generated SDP protocol information to the signaling server for media protocol negotiation, and receiving SDP information fed back by the signaling server; wherein the SDP information includes: the coding mode adopted when the camera transmits the video and the address information of the media server;
(9) coding the video data collected by each camera by using the coding mode recorded in the SDP information to obtain video stream data to be issued of each camera;
(10) establishing a media link between each camera and the media server according to the address information of the media server;
(11) and respectively issuing the video stream data of each camera by using the established media link of each camera.
A transceiver 53 for receiving and transmitting data under the control of the processor 52.
Where a bus architecture (represented by bus 51) is used, bus 51 may include any number of interconnected buses and bridges, with bus 51 linking together various circuits including one or more processors, represented by processor 52, and memory, represented by memory 55. The bus 51 may also link various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further in this embodiment. A bus interface 54 provides an interface between the bus 51 and the transceiver 53. The transceiver 53 may be one element or may be multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. For example: the transceiver 53 receives external data from other devices. The transceiver 53 is used for transmitting data processed by the processor 52 to other devices. Depending on the nature of the computing system, a user interface 56, such as a keypad, display, speaker, microphone, joystick, may also be provided.
The processor 52 is responsible for managing the bus 51 and the usual processing, running a general-purpose operating system as described above. And memory 55 may be used to store data used by processor 52 in performing operations.
Alternatively, processor 52 may be, but is not limited to: a central processing unit, a singlechip, a microprocessor or a programmable logic device.
It will be appreciated that the memory 55 in embodiments of the invention may be either volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. Volatile memory can be Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 55 of the systems and methods described in this embodiment is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 55 stores elements, executable modules or data structures, or a subset thereof, or an expanded set thereof as follows: an operating system 551 and application programs 552.
The operating system 551 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 552 includes various applications, such as a Media Player (Media Player), a Browser (Browser), and the like, for implementing various application services. A program implementing the method of an embodiment of the present invention may be included in the application 552.
To sum up, this embodiment provides an electronic device and a computer-readable storage medium. After a user enters a room of the online classroom, the user equipment may acquire the room configuration information of the room the user entered; when the multi-camera starting parameter carried in the room configuration information indicates that the user equipment needs to start multiple cameras, it acquires the camera list stored by the user equipment and determines the cameras of the user equipment capable of collecting color pictures; it then starts those cameras, acquires the equipment identifier of each camera and the video data collected by each camera, obtains the extension identifier of each camera from the user identifier and the equipment identifier of each camera, sends the obtained extension identifiers to the signaling server, carries out media protocol negotiation with the signaling server, and receives the SDP information fed back by the signaling server, where the SDP information includes the coding mode adopted when the cameras transmit video and the address information of the media server; it encodes the video data collected by each camera with the coding mode recorded in the SDP information to obtain the video stream data to be issued for each camera, establishes a media link between each camera and the media server according to the address information of the media server, and issues the video stream data of each camera over the established media links. Therefore, in scenes that need multi-directional, multi-scene pictures, such as hand actions and blackboard pictures in piano classrooms, painting classrooms and offline-online blackboard classrooms, the multi-camera streaming media technology displays the teacher's teaching content from multiple scenes and improves user experience.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A multi-camera processing method, comprising:
when a user enters a room of an online classroom, user equipment acquires a user identifier of the user and room configuration information of the room entered by the user; the room configuration information carries a multi-camera starting parameter;
when the multi-camera starting parameter indicates that the user equipment needs to start the multi-camera, acquiring a camera list stored by the user equipment; the camera list carries camera identification of the camera installed by the user equipment;
identifying a plurality of cameras which are installed on the user equipment and can acquire color pictures based on camera type characters in the camera identification;
starting a plurality of cameras capable of collecting color pictures, and acquiring equipment identification of each camera in the plurality of cameras and video data collected by each camera;
obtaining an extended identifier of each camera by using the user identifier and the equipment identifier of each camera;
sending the obtained extended identification of each camera to a signaling server;
when feedback information sent when the signaling server receives the extension identification of each camera is acquired, acquiring a coding mode and UDP protocol description information supported by the user equipment, and generating SDP protocol information by using the acquired coding mode and UDP protocol description information;
sending the generated SDP protocol information to the signaling server for media protocol negotiation, and receiving SDP information fed back by the signaling server; wherein the SDP information includes: the coding mode adopted when the camera transmits the video and the address information of the media server;
coding the video data collected by each camera by using the coding mode recorded in the SDP information to obtain video stream data to be issued of each camera;
establishing a media link between each camera and the media server according to the address information of the media server;
and respectively issuing the video stream data of each camera by using the established media link of each camera.
2. The method of claim 1, further comprising:
when the user equipment is determined to be in an overload state, acquiring the resolution currently used by each camera and a resolution list; wherein the resolution list records the correspondence between resolutions and frame rates;
determining, from the resolutions currently in use, the camera currently using the highest resolution, taking that camera as the camera to be restarted, and buffering the video stream data frames currently being published by the camera to be restarted;
closing the camera to be restarted, and publishing the buffered video stream data frames over the media link of the camera to be restarted;
selecting, from the resolutions recorded in the resolution list that are lower than the resolution used by the camera to be restarted, the highest resolution as the resolution to be used after restart, and determining the frame rate corresponding to the selected resolution as the frame rate at which the restarted camera captures video stream data;
and restarting the camera to be restarted with the selected resolution and frame rate, so that the restarted camera continues to capture video stream data at the selected resolution and frame rate; when video stream data captured by the restarted camera is obtained, stopping publishing the buffered video stream data frames over the media link of the restarted camera and publishing the video stream data captured by the restarted camera instead.
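The selection logic of claim 2 — pick the camera with the highest current resolution, then drop it to the highest listed resolution below its current one — can be sketched as follows. The tuple/dictionary representation is an assumption; the patent does not specify data structures.

```python
def pick_restart_settings(current_res, res_list):
    """Select the camera to restart and its post-restart settings.

    current_res maps camera id -> (width, height) currently in use;
    res_list maps (width, height) -> frame rate (the resolution list).
    """
    # The camera currently using the highest resolution is the one restarted.
    cam = max(current_res, key=lambda c: current_res[c][0] * current_res[c][1])
    old = current_res[cam]
    # From resolutions strictly below the current one, take the highest,
    # and use the frame rate the resolution list associates with it.
    lower = [r for r in res_list if r[0] * r[1] < old[0] * old[1]]
    new_res = max(lower, key=lambda r: r[0] * r[1])
    return cam, new_res, res_list[new_res]

current = {"cam0": (1920, 1080), "cam1": (1280, 720)}
table = {(1920, 1080): 30, (1280, 720): 30, (640, 480): 15}
cam, res, fps = pick_restart_settings(current, table)
```

Buffered frames would keep the media link fed while the chosen camera is closed and reopened; that streaming side is not modeled here.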
3. The method according to claim 2, wherein before the step of, when the user equipment is determined to be in an overload state, acquiring the resolution currently used by each camera and the resolution list in which the correspondence between resolutions and frame rates is recorded, the method further comprises:
acquiring system load information and network load information;
when it is determined that the system load information is greater than a system load threshold and the network load information is greater than a network load threshold, determining that the user equipment is in an overload state.
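Claim 3's overload test requires both loads to exceed their thresholds at once. A one-function sketch, with threshold values that are purely illustrative (the patent does not give numbers):

```python
def is_overloaded(sys_load, net_load, sys_threshold=0.85, net_threshold=0.90):
    # Overload is declared only when BOTH the system load AND the network
    # load exceed their thresholds; either one alone is not enough.
    return sys_load > sys_threshold and net_load > net_threshold
```

Requiring both conditions avoids restarting a camera when only one resource is briefly saturated.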
4. The method of claim 1, further comprising:
acquiring a user type input by the user, concatenating the user type with a user name input by the user to obtain an authentication string, and performing a hash calculation on the authentication string to obtain an authentication hash value;
querying a blockchain system, using the user name input by the user, for an identity key corresponding to the user name;
and when the authentication hash value is identical to the queried identity key, allowing the user to enter the room of the online classroom.
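The admission check of claim 4 can be sketched in a few lines. Two assumptions are made here: SHA-256 stands in for the unspecified hash function, and a plain dictionary stands in for the blockchain lookup of the identity key.

```python
import hashlib

def authentication_hash(user_type, user_name):
    # Concatenate user type and user name, then hash the result.
    # SHA-256 is an assumption; the patent names no specific hash.
    return hashlib.sha256((user_type + user_name).encode("utf-8")).hexdigest()

def may_enter(user_type, user_name, key_store):
    # key_store stands in for the blockchain system that maps a user name
    # to the identity key registered for that user.
    return authentication_hash(user_type, user_name) == key_store.get(user_name)

store = {"alice": authentication_hash("teacher", "alice")}
```

Because the user type is folded into the hash, a user who registered as a teacher cannot be admitted by presenting the same name with a different type.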
5. A multi-camera processing apparatus, comprising:
a first acquisition module, configured to acquire, when a user enters a room of an online classroom, a user identifier of the user and room configuration information of the room the user enters; wherein the room configuration information carries a multi-camera start parameter;
a second acquisition module, configured to acquire a camera list stored on the user equipment when the multi-camera start parameter indicates that the user equipment needs to start multiple cameras; wherein the camera list carries a camera identifier of each camera installed on the user equipment;
an identification module, configured to identify, based on camera type characters in the camera identifiers, a plurality of cameras installed on the user equipment that are capable of capturing color pictures;
a starting module, configured to start the plurality of cameras capable of capturing color pictures, and to acquire a device identifier of each camera and the video data captured by each camera;
a processing module, configured to obtain an extended identifier of each camera from the user identifier and the device identifier of that camera;
a sending module, configured to send the obtained extended identifier of each camera to a signaling server;
a third acquisition module, configured to, when feedback information sent by the signaling server upon receipt of the extended identifier of each camera is acquired, acquire the encoding modes and UDP protocol description information supported by the user equipment, and generate SDP protocol information from the acquired encoding modes and UDP protocol description information;
a negotiation module, configured to send the generated SDP protocol information to the signaling server for media protocol negotiation, and to receive SDP information fed back by the signaling server; wherein the SDP information includes the encoding mode to be used when each camera transmits video, and address information of a media server;
an encoding module, configured to encode the video data captured by each camera in the encoding mode recorded in the SDP information to obtain the video stream data to be published for each camera;
a link module, configured to establish a media link between each camera and the media server according to the address information of the media server;
and a publishing module, configured to publish the video stream data of each camera over the established media link of that camera.
6. The apparatus of claim 5, further comprising:
a first acquisition unit, configured to acquire, when the user equipment is determined to be in an overload state, the resolution currently used by each camera and a resolution list; wherein the resolution list records the correspondence between resolutions and frame rates;
a first processing unit, configured to determine, from the resolutions currently in use, the camera currently using the highest resolution, take that camera as the camera to be restarted, and buffer the video stream data frames currently being published by the camera to be restarted;
a second processing unit, configured to close the camera to be restarted and publish the buffered video stream data frames over the media link of the camera to be restarted;
a third processing unit, configured to select, from the resolutions recorded in the resolution list that are lower than the resolution used by the camera to be restarted, the highest resolution as the resolution to be used after restart, and to determine the frame rate corresponding to the selected resolution as the frame rate at which the restarted camera captures video stream data;
and a restarting unit, configured to restart the camera to be restarted with the selected resolution and frame rate, so that the restarted camera continues to capture video stream data at the selected resolution and frame rate, and, when video stream data captured by the restarted camera is obtained, to stop publishing the buffered video stream data frames over the media link of the restarted camera and to publish the video stream data captured by the restarted camera instead.
7. The apparatus of claim 6, further comprising:
a second acquisition unit, configured to acquire system load information and network load information;
a fourth processing unit, configured to determine that the user equipment is in an overload state when it is determined that the system load information is greater than a system load threshold and the network load information is greater than a network load threshold.
8. The apparatus of claim 5, further comprising:
a third acquisition unit, configured to acquire a user type input by the user, concatenate the user type with a user name input by the user to obtain an authentication string, and perform a hash calculation on the authentication string to obtain an authentication hash value;
a query unit, configured to query a blockchain system, using the user name input by the user, for an identity key corresponding to the user name;
and a fifth processing unit, configured to allow the user to enter the room of the online classroom when the authentication hash value is identical to the queried identity key.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
10. An electronic device comprising a memory, a processor, and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the processor to perform the steps of the method of any of claims 1-4.
CN202110146783.4A 2021-02-03 2021-02-03 Multi-camera processing method and device and electronic equipment Active CN112511811B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110146783.4A CN112511811B (en) 2021-02-03 2021-02-03 Multi-camera processing method and device and electronic equipment


Publications (2)

Publication Number Publication Date
CN112511811A CN112511811A (en) 2021-03-16
CN112511811B true CN112511811B (en) 2021-05-04

Family

ID=74952872


Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101557500B (en) * 2009-05-12 2011-05-25 北京学之途网络科技有限公司 Method for monitoring IPTV user behaviors and system thereof
US9873043B2 (en) * 2014-03-31 2018-01-23 Google Llc Methods, systems, and media for enhancing multiplayer game sessions with asymmetric information
CN109450923B (en) * 2018-11-30 2021-06-15 武汉烽火众智数字技术有限责任公司 Video transmission system and method
JP6791288B2 (en) * 2019-03-22 2020-11-25 株式会社セガ Game system, information processing device and program
CN111865924B (en) * 2020-06-24 2022-07-19 新浪网技术(中国)有限公司 Method and system for monitoring user side



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant