CN111935467A - Outer projection arrangement of virtual reality education and teaching - Google Patents


Info

Publication number
CN111935467A
Authority
CN
China
Prior art keywords
processing
processing end
virtual reality
stage
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010892557.6A
Other languages
Chinese (zh)
Inventor
勒秋娜
张斌
刘鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Fuyouduo Technology Co ltd
Original Assignee
Nanchang Fuyouduo Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Fuyouduo Technology Co ltd filed Critical Nanchang Fuyouduo Technology Co ltd
Priority to CN202010892557.6A priority Critical patent/CN111935467A/en
Publication of CN111935467A publication Critical patent/CN111935467A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/12Picture reproducers
    • H04N9/31Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3141Constructional details thereof
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234309Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4 or from Quicktime to Realvideo
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention discloses an external projection device for virtual reality education and teaching. Processing-end information, including the processing-end IP, the PORT, and a GPU flag for the C processing end, is configured in a configuration file. The scheduling scheme makes full use of the stream produced at each stage of a code stream: streams from other stages can be referenced to generate the stream type required by a subsequent stage, or the stage streams can be distributed among different servers so that each server's characteristics are exploited for specific stage streams; for example, a GPU server can be dedicated to the decoding and encoding processes. This reduces the work of regenerating certain stage streams, saves server capacity, and lets specific servers handle specific stage streams, increasing efficiency.

Description

Outer projection arrangement of virtual reality education and teaching
Technical Field
The invention relates to the VR field, in particular to an external projection device for virtual reality education and teaching.
Background
In the visual image acquisition industry, the distribution and transcoding functions of a traditional streaming media server are deployed separately. In traditional scenarios the camera code streams were of a single uniform type, and the formats in common use were likewise uniform. With the development of the industry, several generations of cameras now coexist, with encoding types ranging from MPEG-2 and MPEG-4 through H.264 to H.265, and the encoding and encapsulation types required by terminal playback devices are equally varied, which creates considerable integration difficulties for platform integrators. For an integrator, converging these code streams into a single uniform output type has become essential work.
In the prior art, a transcoding server is built, a scheduling module selects the server in the best state, and the input stream is transcoded and output to the front-end device for playing or distribution. However, different terminals require different code-stream types; for example, a PC client requires the PS type while a web client requires the SRTP type, so the transcoding server must transcode the same stream repeatedly into different types, wasting server capacity. The root cause is that transcoding is a multi-stage process (decoding, filtering, encoding, and so on), not merely an input and an output, yet existing scheduling schemes only match the input and output stages against server performance; the streams produced at the other stages cannot be reused or shared among servers, which causes great waste.
Existing scheduling schemes include DNS scheduling based on flow requests, LVS scheduling at the network-layer load level, Nginx scheduling at the application layer, and distribution scheduling based on the code-stream input/output stages; all of these balance server load or select the best-performing server. Transcoding a code stream is a process in which one input stream generates a sequence of stage streams, in order: original stream, decapsulated stream, decoded stream, filtered stream, encoded stream, encapsulated stream. A scheduling policy should therefore consider not only the actual load of each server but also the service correlation between a new request and existing requests, for example referencing a stage stream of an existing request, such as its decoded stream, to generate the encoded stream required by the new request. No scheduling scheme currently exists that jointly considers this service correlation across the stage streams of a code stream together with server performance.
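For reference, the stage sequence described above can be written down as a simple ordered enumeration. This is only an illustrative sketch; the stage names below are not taken from the patent's own protocol.

    from enum import IntEnum

    class Stage(IntEnum):
        """Stage streams produced while transcoding one input stream, in order."""
        ORIGINAL = 0      # original stream as received from the camera
        DECAPSULATED = 1  # after decapsulation (demuxing)
        DECODED = 2       # after decoding
        FILTERED = 3      # after filtering
        ENCODED = 4       # after encoding
        ENCAPSULATED = 5  # after encapsulation (muxing)

    # A later stage can be generated from any earlier stage of the same source,
    # which is what makes reuse ("referencing") of existing stage streams possible.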
Disclosure of Invention
The invention aims to overcome the defects of the above situation and provide a technical scheme capable of solving these problems. Through the scheduling scheme, each stage stream of an input stream can be fully utilized, transcoding efficiency is improved, server capacity is saved, server characteristics are better exploited, and a specialized server can generate the code stream of a specific stage.
An external projection device for virtual reality education and teaching, which specifically comprises the following steps:
S1, configuring processing-end information in a configuration file, including the processing-end server IP, the processing-end listening port, and server characteristics such as performance parameters, extranet access and GPU running state;
S2, the policy module actively connects to the processing end over the TCP protocol, sends a check protocol, and obtains the basic information of the processing-end server;
S3, when a notify request arrives from the upper level, the policy module builds a tree structure from the source information of the notify and stores the source information; the processing flow of the source code stream is determined from the destination information, comprising one or more of decapsulation, decoding, filtering, encoding and encapsulation, and the stage streams are stored in the source information tree;
S4, searching all processing ends for the one or more best-performing servers, forwarding the notify request to them, and indicating which stage streams need to be generated, kept fully consistent with the policy module; one or more servers are selected because several servers may each process different stages, with the results then combined and output, so that server characteristics such as extranet access or a GPU are used where they perform best;
S5, collecting the response time of the processing end as a criterion for selecting the processing end; if a server times out, a special record is made and an alarm feedback is triggered;
S6, the processing end inspects the original code stream; if it does not match the policy module's record, the processing end pushes a protocol message to the policy module, and the policy module corrects the source information;
S7, when a notify request arrives from the upper level, the source information tree is looked up by the source id, the reference stage of the request is located from the notify destination information, and the destination stream is generated; if the destination stream is generated for the first time, the parameter type of the stage stream is formed into an index and stored in the source information tree; if no source information tree is found for the source id, steps S3 to S6 are repeated;
S8, when a delete request arrives from the upper level, the policy module decrements the reference counts held by the session and checks the reference count of each stage stream of the original stream; if all reference counts are 0, the policy module deletes the stream's tree information structure and issues a delete request to the processing end to delete all stage streams of that stream.
An external projection system for virtual reality education and teaching comprises an upper level, a policy end and a processing end. The policy module runs the processes of searching for a processing end, selecting a processing process on the processing end, updating the information of the source resource tree, and sending a generation request to the processing end; the processing end consists of multiple servers with video-processing capability.
The hardware for external projection of virtual reality education and teaching comprises the computer hardware and network equipment used to run the external projection device and the external projection system for virtual reality education and teaching.
The beneficial effects of the invention are as follows. Existing scheduling schemes match and distribute only the original stream and the encapsulated stream, so the streams produced at the decapsulation, decoding, filtering and encoding stages are not fully utilized, which is wasteful. The present scheduling scheme makes full use of the stream produced at each stage: streams from other stages can be referenced to generate the stream type required by a subsequent stage, or the stage streams can be distributed among different servers so that each server's characteristics are exploited for specific stage streams; for example, a GPU server can be dedicated to the decoding and encoding processes. This reduces the work of regenerating certain stage streams, saves server capacity, and lets specific servers handle specific stage streams, increasing efficiency.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Detailed Description
In the embodiment of the invention, the designed scheme accesses the audio and video of several subway lines; the description below uses the video accessed from one line as an example. The video sources use three encoding formats, H.264, MPEG-4 and MPEG-2, while the required output encoding type is H.264, so transcoding is needed. There are two playback types, PC-client playback and web-page playback, which require different output encapsulation formats. The platform deploys three processing ends A, B and C, where C is a GPU server, plus one policy module (schedule).
First, the processing-end information is configured in a configuration file, including each processing end's IP and PORT, and a GPU flag for the C processing end.
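By way of illustration only (the patent does not fix a configuration syntax), the processing-end information of the embodiment might look like the following sketch; the field names and addresses are assumptions.

    # Hypothetical processing-end configuration for the embodiment:
    # A and B are plain distribution servers, C is the GPU transcoding server.
    PROCESSING_ENDS = {
        "A": {"ip": "192.0.2.10", "port": 9000, "gpu": False},
        "B": {"ip": "192.0.2.11", "port": 9000, "gpu": False},
        "C": {"ip": "192.0.2.12", "port": 9000, "gpu": True},
    }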
The policy module is then started; it reads the configuration file and obtains the IP and port of processing ends A/B/C. It connects to the processing ends, establishing a connection to each process of each processing end; the policy module stores these connections, sends heartbeat messages at regular intervals, and keeps the connections alive. The processing end is multi-process, and a connection is established to each process so that processing load can be balanced across processes and their capacity fully utilized.
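A minimal sketch of this step, assuming one plain TCP socket per processing-end process and a placeholder heartbeat payload (the patent's private protocol is not disclosed, so both are assumptions):

    import socket
    import threading
    import time

    def connect_process(ip: str, port: int) -> socket.socket:
        """Open one long-lived TCP connection to a single processing-end process."""
        conn = socket.create_connection((ip, port), timeout=5)
        conn.settimeout(None)
        return conn

    def heartbeat_loop(conn: socket.socket, interval: float = 10.0) -> None:
        """Periodically send a heartbeat so the long connection is kept alive.
        The payload b"HEARTBEAT\n" is a stand-in for the private protocol."""
        while True:
            conn.sendall(b"HEARTBEAT\n")
            time.sleep(interval)

    # One connection (and one heartbeat thread) per processing-end process, e.g.:
    # conns = [connect_process("192.0.2.12", 9000 + i) for i in range(n_processes)]
    # for c in conns:
    #     threading.Thread(target=heartbeat_loop, args=(c,), daemon=True).start()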
The policy module sends a check protocol, a private protocol, to the processing end in order to obtain global information such as CPU utilization, CPU core count, memory utilization, network-card performance and uplink/downlink traffic, as well as per-process information such as CPU utilization and connection count. The policy module selects the best-performing server based on this basic server information.
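A sketch of the kind of record the check protocol might return; the field names are assumptions, since the private protocol is not disclosed. How the fields are combined into a single score is discussed with the performance formula below.

    from dataclasses import dataclass
    from typing import Dict

    @dataclass
    class ServerInfo:
        """Basic information a check response might carry (assumed field names)."""
        name: str
        cpu_usage: float            # whole-server CPU utilization, 0..100
        cpu_cores: int
        mem_usage: float            # memory utilization, 0..100
        nic_mbps: float             # network-card capability
        uplink_mbps: float          # current uplink traffic
        downlink_mbps: float        # current downlink traffic
        processes: Dict[int, dict]  # pid -> {"cpu_usage": ..., "connections": ...}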
The policy module maintains a global red-black tree that stores the basic information of each source, indexed by the video source's unique id. The basic information of a source includes the processing end responsible for each of the original stream, the decapsulated stream, the decoded stream, the filtered stream, the encoded stream and the encapsulated stream, together with the reference count of each stage stream. In principle each stream is generated only once; the original, decapsulated and decoded streams can only be of one type, while the filtered, encoded and encapsulated streams can be of multiple types (for example the encoding may be H.264 or H.265). Therefore, when the filter, encoding and encapsulation stage streams are stored, three separate red-black trees are created, indexed by the corresponding filter parameters, encoding parameters and encapsulation parameters. This is the basic data structure of the policy module.
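A sketch of this data structure. Python has no built-in red-black tree, so ordinary dicts stand in for the ordered trees here; the shape of the record, not the container, is the point.

    from dataclasses import dataclass, field
    from typing import Dict, Optional

    @dataclass
    class StageStream:
        processing_end: str   # which processing end holds this stage stream
        ref_count: int = 0    # how many sessions currently reference it

    @dataclass
    class SourceInfo:
        """Per-source record of the policy module; dicts stand in for red-black trees."""
        source_id: str
        original: Optional[StageStream] = None
        decapsulated: Optional[StageStream] = None
        decoded: Optional[StageStream] = None
        # Filter, encoding and encapsulation streams may exist in several variants,
        # so they are indexed by their parameters (e.g. "h264", "h265", "ps", "srtp").
        filtered: Dict[str, StageStream] = field(default_factory=dict)
        encoded: Dict[str, StageStream] = field(default_factory=dict)
        encapsulated: Dict[str, StageStream] = field(default_factory=dict)

    # Global "tree" of all sources, keyed by the video source's unique id.
    sources: Dict[str, SourceInfo] = {}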
The operation of the policy module is divided into four stages, executed in sequence: the first stage searches for a processing end, the second stage selects a processing process on that processing end, the third stage updates the information of the source resource tree, and the fourth stage generates a request and sends it to the processing end.
In the first stage, a processing end is searched for. The global red-black tree is first searched by the source's unique id, and the final output stage is computed from the destination information of the notify protocol. A search index is built, mainly from the notify destination information, the reference stage is looked up in the resource tree, and the processing end handling that stage stream is obtained from the reference-stage information. For example, if a notify request carries filter parameters, the request needs the four stages of decoding, filtering, encoding and encapsulation: the final encapsulation-parameter index is looked up in the source's encapsulation red-black tree, and if it is found the search ends and the encapsulated stream is referenced; otherwise the encoding red-black tree, the filter red-black tree, the decoding stage, the decapsulation stage and the original stream are searched in turn for a reference stage, so the minimal reference stage is the original stream.
The current performance of the processing end is then checked; if it does not meet the requirements, the stage stream is sent to another server for processing, and a stream referenced from another server is called an external reference stream. This situation only arises when the original processing end cannot handle the stream. The external reference stream has a further use: a hot video needs to be accessed only once even when a large number of requests arrive, since the policy module distributes the stream internally and different processing ends then send it to different terminals.
If the parsed notify destination information involves transcoding, a GPU server is preferred: the stream is accessed by processing end A, forwarded internally to processing end C (the GPU server), processed up to the encoding stage, then sent to server A or B, where the encapsulated stream is generated and sent to the terminal. The A/B servers handle the input/output stages and mainly perform distribution, exploiting their high bandwidth, while processing end C is responsible for transcoding, exploiting the GPU for efficient transcoding. If an extranet request exists, an extranet processing end must be selected; this requirement does not arise in this project.
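A sketch of the first-stage lookup order described above (encapsulation, then encoding, then filter, then decoded, then decapsulated, then the original stream), built on the hypothetical SourceInfo structure sketched earlier:

    def find_reference_stage(src, encaps_key, encode_key, filter_key):
        """Return (stage_name, stream) for the most advanced stage stream that can be
        reused for this request; the original stream is the minimal reference stage."""
        if encaps_key in src.encapsulated:
            return "encapsulated", src.encapsulated[encaps_key]
        if encode_key in src.encoded:
            return "encoded", src.encoded[encode_key]
        if filter_key in src.filtered:
            return "filtered", src.filtered[filter_key]
        if src.decoded is not None:
            return "decoded", src.decoded
        if src.decapsulated is not None:
            return "decapsulated", src.decapsulated
        return "original", src.original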
Formula for evaluating server performance (reproduced in the source only as an embedded image); the quantities it combines are defined as follows:
P: performance index of the processing end;
I: rejection condition, where 0 means the rejection condition is met and the server is excluded (for example an extranet requirement it cannot satisfy) and 1 means it is not;
S(i): weight of attribute i;
X(i): value of attribute i, such as CPU utilization or memory utilization;
Pre: priority term, whose value is Base for a prioritized server and 0 otherwise;
Base: normalization base matching the order of magnitude of the attributes; it is 1 when the attributes range over 0-1 and 100 when they range over 0-100.
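Because the exact formula survives only as an image, the sketch below merely illustrates one plausible way of combining the quantities defined above (a rejection-gated weighted sum plus the priority term); it should not be read as the patent's actual formula.

    def server_score(reject, weights, values, priority, base):
        """Hypothetical combination of the documented quantities:
        reject   -> I   (0 = excluded, 1 = admissible)
        weights  -> S(i), values -> X(i)
        priority -> Pre (adds Base when the server is prioritized)
        base     -> Base (order-of-magnitude normalization)"""
        pre = base if priority else 0.0
        weighted = sum(s * x for s, x in zip(weights, values))
        return reject * (pre + weighted)

    # Example (assumption: attribute values are inverted utilizations, so that a
    # higher score means a better server). A prioritized GPU server with 30% CPU
    # and 40% memory utilization:
    # print(server_score(1, [0.7, 0.3], [100 - 30, 100 - 40], True, 100))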
and in the second stage, on the basis of selecting one or more processing terminals in the first stage, a processing process is selected for the processing terminals. When selecting the process, the cpu utilization rate and the network connection number ratio (the current network connection number is divided by the maximum connection number of a single process) of the process are mainly referred to. The strategy module respectively sets a weight for the two parts, and the concept of the weight is to represent the proportion of the final performance value. For example, the cpu usage is weighted to 70 and the network connection number ratio is 30, and we consider that cpu accounts for 70% of the process selection. The value of the weight is not fixed, and during the use of the project, information is collected and then the appropriate weight is analyzed.
In the third stage, the information of the source resource tree is updated. From the destination information in the notify, it can be computed which stages the request will reference and which stages will be generated, and this information must be saved. If the source is new, its red-black tree must be created and initialized, and its filter, encoding and encapsulation red-black trees are created at the same time to store the information of the stage streams generated for that source. Updating the resource tree information means, on one hand, inserting the stage streams newly generated by the request at the positions of the corresponding stages in the source information tree, and on the other hand, updating the reference relationships and the reference counts of the stages. This stage keeps the information about the source up to date so that the reference stage can be found in the first stage.
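A sketch of this third stage on the hypothetical SourceInfo structure from earlier: create the per-source record if the source is new, insert the newly generated stage streams, and increase the reference counts of the stages the request uses.

    def update_resource_tree(sources, source_id, new_streams, referenced):
        """new_streams: list of (stage_name, key_or_None, StageStream) generated now.
        referenced:  list of (stage_name, key_or_None) whose reference counts grow."""
        src = sources.setdefault(source_id, SourceInfo(source_id=source_id))
        for stage, key, stream in new_streams:
            container = getattr(src, stage)
            if isinstance(container, dict):     # filtered / encoded / encapsulated
                container[key] = stream
            else:                               # original / decapsulated / decoded
                setattr(src, stage, stream)
        for stage, key in referenced:
            container = getattr(src, stage)
            stream = container[key] if isinstance(container, dict) else container
            if stream is not None:
                stream.ref_count += 1
        return src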
In the fourth stage, a request is generated and sent to the processing end. At this point the main work of the policy module is finished; the task of this stage is to send a protocol request to the selected process of the selected processing end, whose content is essentially the same as the notify protocol sent by the upper level to the policy module. The policy module adds the information about the stages referenced by the request, so that the processing end can locate the referenced stage streams from this information and knows which stage streams it needs to generate.
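A sketch of this fourth stage: the forwarded request is essentially the upstream notify plus the reference-stage information added by the policy module. The message layout and JSON-over-TCP transport are assumptions, not the patent's private protocol.

    import json
    import socket

    def forward_request(proc_ip, proc_port, notify, reference_stage, stages_to_generate):
        """Send the augmented notify to the chosen processing-end process."""
        request = dict(notify)                           # content mirrors the upstream notify
        request["reference_stage"] = reference_stage     # e.g. {"stage": "decoded"}
        request["generate_stages"] = stages_to_generate  # e.g. ["encoded", "encapsulated"]
        with socket.create_connection((proc_ip, proc_port), timeout=5) as conn:
            conn.sendall(json.dumps(request).encode("utf-8") + b"\n")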
The last part is deletion. After receiving a delete command from the upper level, the policy module decides whether the session can be deleted according to the session's reference stages and output stages. When the reference counts of all output stages are 0, the session may be deleted.
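A sketch of the deletion step on the hypothetical SourceInfo structure: decrement the reference counts held by the session, and once every stage stream's count has reached zero, drop the whole record and notify the processing end.

    def handle_delete(sources, source_id, session_refs, send_delete_to_processing_end):
        """session_refs: list of (stage_name, key_or_None) referenced by the deleted session."""
        src = sources.get(source_id)
        if src is None:
            return
        for stage, key in session_refs:
            container = getattr(src, stage)
            stream = container.get(key) if isinstance(container, dict) else container
            if stream is not None and stream.ref_count > 0:
                stream.ref_count -= 1
        # Collect every stage stream that currently exists for this source.
        streams = [s for s in (src.original, src.decapsulated, src.decoded) if s is not None]
        for variants in (src.filtered, src.encoded, src.encapsulated):
            streams.extend(variants.values())
        if all(s.ref_count == 0 for s in streams):
            del sources[source_id]                     # delete the stream tree information
            send_delete_to_processing_end(source_id)   # delete all stage streams of the stream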
The above describes the specific implementation flow of the policy module, that is, the implementation flow of the scheduling scheme.
An external projection system for virtual reality education and teaching comprises an upper level, a policy end and a processing end. The policy module runs the processes of searching for a processing end, selecting a processing process on the processing end, updating the information of the source resource tree, and sending a generation request to the processing end; the processing end consists of multiple servers with video-processing capability.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.

Claims (9)

1. An external projection device for virtual reality education and teaching, characterized in that the method specifically comprises the following steps:
S1, configuring processing-end information in a configuration file, including the processing-end server IP, the processing-end listening port, and the server characteristics;
S2, the policy module actively connects to the processing end over the TCP protocol, sends a check protocol, and obtains the basic information of the processing-end server;
S3, when a notify request arrives from the upper level, the policy module builds a tree structure from the source information of the notify and stores the source information; the processing flow of the source code stream is determined from the destination information, and the stage streams are stored in the source information tree;
S4, searching all processing ends for the one or more best-performing servers, forwarding the notify request of step S3 to them, and indicating which stage streams need to be generated, kept fully consistent with the policy module;
S5, collecting the response time of the processing end as a criterion for selecting the processing end;
S6, the processing end inspects the original code stream; when it does not match the policy module's record, the processing end pushes a protocol message to the policy module, and the policy module corrects the source information;
S7, when a notify request arrives from the upper level, the source information tree is looked up by the source id, the reference stage of the request is located from the notify destination information, and the destination stream is generated;
S8, when a delete request arrives from the upper level, the policy module decrements the reference counts held by the session and checks the reference count of each stage stream of the original stream; if all reference counts are 0, the policy module deletes the stream's tree information structure and issues a delete request to the processing end to delete all stage streams of that stream.
2. The external projection device for virtual reality education and teaching of claim 1, wherein the processing flow of the source code stream in step S3 comprises one or more of decapsulation, decoding, filtering, encoding and encapsulation.
3. The external projection device for virtual reality education and teaching of claim 1, wherein in step S5, when the server times out, an alarm feedback is further triggered.
4. An external projection system for virtual reality education and teaching, comprising an upper level, a policy end and a processing end, wherein the policy module runs the processes of searching for a processing end, selecting a processing process on the processing end, updating the information of the source resource tree, and sending a generation request to the processing end; the processing end consists of multiple servers with video-processing capability.
5. Hardware for external projection of virtual reality education and teaching, characterized by comprising the computer hardware and network equipment used to run the external projection device for virtual reality education and teaching and the external projection system for virtual reality education and teaching.
6. The external projection device for virtual reality education and teaching of claim 1, wherein a special record is made when the server times out in step S5.
7. The external projection device for virtual reality education and teaching of claim 1, wherein in step S7, if the source information tree is not found by the source id, steps S3 to S6 are repeated.
8. The external projection device for virtual reality education and teaching of claim 1, wherein in step S7, if the destination stream is generated for the first time, the parameter types of the stage stream are formed into an index and stored in the source information tree.
9. The external projection device for virtual reality education and teaching of claim 1, wherein in step S2, the basic information of the processing-end server includes performance parameters, extranet access, and GPU running state.
CN202010892557.6A 2020-08-31 2020-08-31 Outer projection arrangement of virtual reality education and teaching Pending CN111935467A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010892557.6A CN111935467A (en) 2020-08-31 2020-08-31 Outer projection arrangement of virtual reality education and teaching

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010892557.6A CN111935467A (en) 2020-08-31 2020-08-31 Outer projection arrangement of virtual reality education and teaching

Publications (1)

Publication Number Publication Date
CN111935467A true CN111935467A (en) 2020-11-13

Family

ID=73310134

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010892557.6A Pending CN111935467A (en) 2020-08-31 2020-08-31 Outer projection arrangement of virtual reality education and teaching

Country Status (1)

Country Link
CN (1) CN111935467A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6581102B1 (en) * 1999-05-27 2003-06-17 International Business Machines Corporation System and method for integrating arbitrary isochronous processing algorithms in general media processing systems
CN101707543A (en) * 2009-11-30 2010-05-12 北京中科大洋科技发展股份有限公司 Enterprise media bus system supporting multi-task type and enterprise media bus method supporting multi-task type
CN102123279A (en) * 2010-12-28 2011-07-13 乐视网信息技术(北京)股份有限公司 Distributed real-time transcoding method and system
CN106154707A (en) * 2016-08-29 2016-11-23 广州大西洲科技有限公司 Virtual reality projection imaging method and system
CN109213593A (en) * 2017-07-04 2019-01-15 阿里巴巴集团控股有限公司 Resource allocation methods, device and equipment for panoramic video transcoding
CN110868610A (en) * 2019-10-25 2020-03-06 富盛科技股份有限公司 Streaming media transmission method and device and server
CN111613234A (en) * 2020-05-29 2020-09-01 富盛科技股份有限公司 Multi-stage flow scheduling method, system and device


Similar Documents

Publication Publication Date Title
US8533597B2 (en) Strategies for configuring media processing functionality using a hierarchical ordering of control parameters
CN103957341B (en) The method of picture transfer and relevant device thereof
CN111613234B (en) Multi-stage flow scheduling method, system and device
WO2012149296A2 (en) Providing content aware video adaptation
CN105262825A (en) SPICE cloud desktop transporting and displaying method and system on the basis of H.265 algorithm
CN101977218A (en) Internet playing file transcoding method and system
US11089334B1 (en) Methods and systems for maintaining quality of experience in real-time live video streaming
CN104144349A (en) SPICE video coding and decoding expansion method and system based on H264
US8855193B2 (en) Image processing apparatus and method for converting divisional code streams into packets using header information
WO2023207119A1 (en) Immersive media processing method and apparatus, device, and storage medium
Bouaafia et al. Deep learning-based video quality enhancement for the new versatile video coding
CN108632679B (en) A kind of method that multi-medium data transmits and a kind of view networked terminals
CN112104867A (en) Video processing method, video processing device, intelligent equipment and storage medium
US8681860B2 (en) Moving picture compression apparatus and method of controlling operation of same
Zakerinasab et al. Dependency-aware distributed video transcoding in the cloud
CN109302384B (en) Data processing method and system
CN104469259A (en) Cloud terminal video synthesis method and system
US20240080487A1 (en) Method, apparatus for processing media data, computer device and storage medium
CN111935467A (en) Outer projection arrangement of virtual reality education and teaching
CN114079749A (en) Cross-platform system for field of intelligent production business of manufacturing industry
WO2003098930A1 (en) Information processing device and method, recording medium, and program
JP2002532996A (en) Web-based video editing method and system
CN114598834A (en) Video processing method and device, electronic equipment and readable storage medium
US12022088B2 (en) Method and apparatus for constructing motion information list in video encoding and decoding and device
WO2023130893A1 (en) Streaming media based transmission method and apparatus, electronic device and computer-readable storage medium

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20201113)