CN117014560A - Video scheduling method, device, medium and equipment - Google Patents
Video scheduling method, device, medium and equipment
- Publication number
- CN117014560A (application CN202310995404.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- displaying
- target object
- acquiring
- relative position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
Abstract
The application discloses a video scheduling method, device, medium and equipment. The method first acquires an image captured by a main camera and related to a target object as a first image, and an image captured by at least one auxiliary camera and related to at least part of the target object as a second image. It then determines the relative position of the second image within the first image, and finally displays the first image and displays the second image according to that relative position, with the second image positioned above the first image.
Description
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, a medium, and a device for video scheduling.
Background
In daily life, cameras are widely used to gather information. When doing so, a user must actively call up the camera whose view contains the target object in order to observe it. Moreover, to better follow the target object's moving track, a camera with a wide viewing angle is needed, so the captured video images are often of low definition and cannot provide accurate information about the target object.
Therefore, how to improve the definition of images related to the target object while still capturing a wide-viewing-angle image is a problem to be solved.
Disclosure of Invention
The application provides a video scheduling method, device, medium and equipment, which partially solve the technical problem of improving the definition of images related to a target object while still capturing a wide-viewing-angle image.
In a first aspect, the present application provides a method of video scheduling, the method comprising:
acquiring an image acquired by a main camera related to a target object as a first image, and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image;
determining a relative position of the first image and the second image;
and displaying the first image and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
Optionally, before acquiring the image acquired by the main camera related to the target object, the method further comprises:
acquiring a map containing all the main cameras;
and marking all the main cameras in the map, and sequentially connecting all the main cameras to obtain an acquisition route.
Optionally, obtaining a map including each main camera specifically includes:
and acquiring the URL address of the map containing each main camera, so as to acquire the map according to the URL address.
Optionally, displaying the first image and displaying the second image according to the relative position specifically includes:
displaying the first image and displaying the second image according to the relative position;
determining an image of the next target object according to the connection sequence of the main cameras, and taking the image as a third image;
the third image is shown.
Optionally, the method further comprises:
and acquiring and displaying the road information of the target object.
Optionally, acquiring an image acquired by a main camera related to the target object as a first image, and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image specifically includes:
acquiring a high-point camera related to a target object, and taking an image acquired by the high-point camera as a first image;
and acquiring an image acquired by at least one near-point camera related to at least part of the target object as a second image.
Optionally, before displaying the first image and displaying the second image according to the relative position, the method further comprises:
distributing each streaming media node to different servers;
displaying the first image and displaying the second image according to the relative position, wherein the method specifically comprises the following steps:
and displaying the first image, displaying the second image according to the relative position, and uploading the second image by a plurality of servers.
In a second aspect, the present application provides an apparatus for video scheduling, including:
the acquisition module is used for acquiring an image acquired by the main camera related to the target object as a first image and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image;
a determining module for determining a relative position of the first image and the second image;
and the display module is used for displaying the first image and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
In a third aspect, the present application provides an electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing a method of video scheduling as provided in the first aspect when executing the program.
In a fourth aspect, the present application provides a non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method of video scheduling as provided in the first aspect.
The at least one technical scheme adopted by the application can achieve the following beneficial effects:
the application provides a video scheduling method, which comprises the steps of firstly acquiring an image acquired by a main camera related to a target object as a first image, and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image. And determining the relative position of the second image and the first image, finally displaying the first image, and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
According to the method, the main camera can be utilized to acquire the large-view-angle image of the target object, and the clear image acquired by the auxiliary camera covers part of the large-view-angle image so as to display the clear image in the large-view-angle image. Therefore, under the condition of shooting a large-view-angle image, the definition of the image related to the target object can be improved, and further more details of the target object can be acquired.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
fig. 1 is a schematic flow chart of a video scheduling method provided in the present application;
fig. 2 is a schematic diagram of an apparatus for video scheduling according to the present application;
fig. 3 is a schematic diagram of an electronic device corresponding to fig. 1 according to the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the technical solutions of the present application will be clearly and completely described below with reference to specific embodiments of the present application and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following describes in detail the technical solutions provided by the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a flow chart of a video scheduling method provided in the present application, which includes the following steps:
s101: an image acquired by a primary camera relating to a target object is acquired as a first image, and an image acquired by at least one secondary camera relating to at least part of the target object is acquired as a second image.
As described above, the core of the scheme is to use images captured by auxiliary cameras to improve, within a wide-viewing-angle image, the definition of the image regions related to the target object. The method may therefore be executed by a client installed on a terminal device such as a desktop or notebook computer used by a user; for convenience of description, the client is taken as the execution subject of the video scheduling method below.
First, the client may acquire an image captured by a main camera related to the target object as a first image. The target object may be a single object, several objects, or a scene composed of several objects. Because a camera mounted at a higher position obtains a wider viewing angle, in this specification the main camera may be a high-point camera, i.e., a camera installed at a higher position.
At the same time, the client may acquire an image captured by at least one auxiliary camera related to at least part of the target object as a second image. The auxiliary camera may be a near-point camera, i.e., a camera with a narrower viewing angle but higher definition in the captured image.
To acquire the images more conveniently, the server may obtain from the Internet a map containing each target object and then send the map's URL address to the client. Of course, this flow may also be completed by the client alone; the choice may be made according to the actual situation.
After obtaining the map containing the target objects, the client may mark the target objects in the map and connect them in sequence to obtain an acquisition route, then select target objects in turn along that route. It should be noted that the sequence here may follow the moving track of the target object across the marked positions.
In practical application, the acquisition route may be selected first and the target objects then determined from the cameras along the route, or the target objects may be determined first and connected to form the acquisition route. The choice may be made according to actual use.
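The marking-and-connecting step above can be sketched in code. This is an illustrative sketch only: the camera identifiers, the coordinates, and the `build_acquisition_route` helper are assumptions for demonstration and are not specified by the patent.

```python
def build_acquisition_route(cameras):
    """cameras: list of (camera_id, (lat, lon)) pairs, ordered by the
    target object's moving track. Markers are placed on the map and
    consecutive markers are connected to form the acquisition route."""
    markers = [{"id": cam_id, "pos": pos} for cam_id, pos in cameras]
    route = [(markers[i]["id"], markers[i + 1]["id"])
             for i in range(len(markers) - 1)]
    return markers, route

# Three hypothetical high-point cameras along the target's track.
markers, route = build_acquisition_route([
    ("cam-A", (31.23, 121.47)),
    ("cam-B", (31.24, 121.48)),
    ("cam-C", (31.25, 121.49)),
])
print(route)  # [('cam-A', 'cam-B'), ('cam-B', 'cam-C')]
```

Connecting consecutive markers in track order also yields the connection sequence used later to pick the next target object's camera.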
S102: a relative position of the first image and the second image is determined.
After acquiring the first image and the second image, the client may determine their relative position, i.e., the position within the first image of the scene depicted in the second image.
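One simple way to represent that relative position is a rectangle in the first image's pixel coordinates, looked up from pre-computed per-camera data. The sketch below is an assumption for illustration: the patent does not specify how the position is obtained, and the calibration table and its numbers are invented.

```python
# Hypothetical calibration: for each auxiliary (near-point) camera, the
# rectangle of the main camera's frame that its view corresponds to.
CALIBRATION = {
    "aux-01": {"x": 420, "y": 180, "w": 320, "h": 240},
}

def relative_position(aux_camera_id, calibration=CALIBRATION):
    """Position of the second image's scene within the first image."""
    r = calibration[aux_camera_id]
    return (r["x"], r["y"], r["w"], r["h"])

print(relative_position("aux-01"))  # (420, 180, 320, 240)
```

In practice the rectangle could instead be found dynamically, for example by matching the second image against the first.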
S103: and displaying the first image and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
After obtaining the relative position of the first image and the second image, the client may display the first image and then display the second image according to the relative position, with the second image placed on top of the first. That is, the higher-definition second image covers the corresponding lower-definition region of the first image.
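The covering step can be sketched as a simple array paste: the second image replaces the corresponding lower-definition region of the first. This is a minimal sketch, assuming both images are plain NumPy arrays and that the second image fits inside the first at the given position.

```python
import numpy as np

def overlay(first, second, x, y):
    """Display the second image above the first: paste the sharper
    second image over the first image at position (x, y)."""
    out = first.copy()
    h, w = second.shape[:2]
    out[y:y + h, x:x + w] = second
    return out

# Toy frames: an 8x8 "wide-angle" first image and a 3x3 "close-up".
first = np.zeros((8, 8), dtype=np.uint8)
second = np.full((3, 3), 255, dtype=np.uint8)
result = overlay(first, second, x=2, y=1)
print(result[1, 2], result[0, 0])  # 255 0
```

The region `result[1:4, 2:5]` now holds the close-up pixels while the rest of the wide-angle frame is unchanged.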
In this specification, while displaying the first image and the second image, the client may also display, according to the connection sequence of the target objects, the image captured by the main camera of the next target object as a third image.
Likewise, the client may display the road information of the target object while displaying the first image and the second image. The road information may include the congestion level of the road, the ambient temperature, the illumination intensity, and so on.
It should be noted that, while displaying the first image and the second image, the client may push the currently displayed frames to a plurality of devices. During pushing, the client may distribute each streaming media node to different servers, which jointly handle the uploading; servers with smaller loads may be preferentially selected for uploading so as to balance the load across the servers.
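The load-balancing choice described above — preferring the server with the smaller load for the next upload — can be sketched as follows. The server records and load figures are invented for illustration.

```python
def pick_server(servers):
    """Select the least-loaded server to handle the next
    streaming-media node's upload, balancing load across servers."""
    return min(servers, key=lambda s: s["load"])

servers = [
    {"name": "srv-1", "load": 0.7},
    {"name": "srv-2", "load": 0.2},
    {"name": "srv-3", "load": 0.5},
]
print(pick_server(servers)["name"])  # srv-2
```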
An embodiment provided by the present application is described below.
First, a corresponding application platform is configured at the client, and parameters such as users, services and devices are configured at the same time. Streaming media nodes are deployed on the plurality of servers corresponding to the application platform, and the URL address of the map is obtained from the servers.
The client can open the map from the URL address; the user marks each target object in the map, and the client automatically connects the marked objects to determine their connection sequence.
The main cameras, the auxiliary cameras, and the display order of the images they capture are then determined according to the connection sequence and the target objects.
The client can display the image captured by the corresponding main camera and the images captured by the corresponding auxiliary cameras according to the moving position of the target object. Display may be performed by showing the second image on top of the first image according to the relative position of the first image and the second image.
According to the method, the main camera can be utilized to acquire the large-view-angle image of the target object, and the clear image acquired by the auxiliary camera covers part of the large-view-angle image so as to display the clear image in the large-view-angle image. Therefore, under the condition of shooting a large-view-angle image, the definition of the image related to the target object can be improved, and further more details of the target object can be acquired.
The above method for video scheduling provided for one or more embodiments of the present application is based on the same idea, and the present application further provides a corresponding apparatus for video scheduling, as shown in fig. 2.
Fig. 2 is a schematic diagram of an apparatus for video scheduling according to the present application, including:
an acquisition module 201, configured to acquire, as a first image, an image acquired by a main camera related to a target object, and acquire, as a second image, an image acquired by at least one auxiliary camera related to at least part of the target object;
a determining module 202, configured to determine a relative position of the first image and the second image;
and the display module 203 is configured to display the first image and display the second image according to the relative position, where the second image is located above the first image.
Optionally, the obtaining module 201 is specifically configured to obtain a map including each main camera; and marking all the main cameras in the map, and sequentially connecting all the main cameras to obtain an acquisition route.
Optionally, the obtaining module 201 is specifically configured to obtain a URL address of a map including each primary camera, so as to obtain the map according to the URL address.
Optionally, the display module 203 is specifically configured to display the first image and display the second image according to the relative position; determining an image of the next target object according to the connection sequence of the main cameras, and taking the image as a third image; the third image is shown.
Optionally, the display module 203 is specifically configured to acquire and display the road information of the target object.
Optionally, the acquiring module 201 is specifically configured to acquire a high-point camera related to the target object, and take an image acquired by the high-point camera as a first image; and acquiring an image acquired by at least one near-point camera related to at least part of the target object as a second image.
Optionally, the display module 203 is further configured to distribute each streaming media node to different servers; and displaying the first image, displaying the second image according to the relative position, and uploading the second image by a plurality of servers.
The present application also provides a computer readable storage medium storing a computer program, the computer program being operable to perform the video scheduling method provided in fig. 1 above.
The application also provides a schematic block diagram of the electronic device shown in fig. 3, corresponding to fig. 1. As shown in fig. 3, at the hardware level the electronic device includes a processor, an internal bus, a network interface, a memory, and a non-volatile storage, and may of course also include the hardware required by other services. The processor reads the corresponding computer program from the non-volatile storage into the memory and then runs it to implement the video scheduling method described above with respect to fig. 1. Of course, the present application does not exclude other implementations, such as logic devices or combinations of hardware and software; that is, the execution subject of the processing flows below is not limited to logic units and may also be hardware or logic devices.
In the 1990s, an improvement to a technology could be clearly distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, transistor, or switch) or an improvement in software (an improvement to a method flow). With the development of technology, however, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures: designers almost always obtain a corresponding hardware circuit structure by programming the improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement in a method flow cannot be realized with a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD), such as a field programmable gate array (Field Programmable Gate Array, FPGA), is an integrated circuit whose logic functions are determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, such programming is today mostly implemented with "logic compiler" software, which is similar to the software compilers used in program development: the source code to be compiled must be written in a particular programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used.
It will also be apparent to those skilled in the art that a hardware circuit implementing the logic method flow can be readily obtained merely by programming the method flow into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, in addition to implementing the controller purely as computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps so that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included within it for performing various functions may also be regarded as structures within the hardware component. Indeed, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in the same piece or pieces of software and/or hardware when implementing the present application.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape/magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable media (including but not limited to disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer media including memory storage devices.
The embodiments of the present application are described in a progressive manner; identical or similar parts of the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the system embodiments are described relatively simply because they are substantially similar to the method embodiments; for relevant parts, reference may be made to the description of the method embodiments.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and variations of the present application will be apparent to those skilled in the art. Any modification, equivalent replacement, improvement, etc. which come within the spirit and principles of the application are to be included in the scope of the claims of the present application.
Claims (10)
1. A method of video scheduling, the method comprising:
acquiring an image acquired by a main camera related to a target object as a first image, and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image;
determining a relative position of the first image and the second image;
and displaying the first image and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
2. The method of claim 1, wherein prior to acquiring the image acquired by the main camera related to the target object, the method further comprises:
acquiring a map containing all the main cameras;
and marking all the main cameras in the map, and sequentially connecting all the main cameras to obtain an acquisition route.
3. The method according to claim 2, wherein acquiring a map containing all the main cameras specifically comprises: acquiring a URL address of the map containing each main camera, and acquiring the map according to the URL address.
4. The method according to claim 2, wherein displaying the first image and displaying the second image according to the relative position specifically comprises:
displaying the first image and displaying the second image according to the relative position;
determining an image of the next target object according to the connection sequence of the main cameras, and taking the image as a third image;
the third image is shown.
5. The method according to claim 1, wherein the method further comprises:
and acquiring and displaying the road information of the target object.
6. The method according to claim 1, wherein acquiring an image acquired by a main camera related to a target object as a first image, and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image, specifically comprises:
acquiring a high-point camera related to a target object, and taking an image acquired by the high-point camera as a first image;
and acquiring an image acquired by at least one near-point camera related to at least part of the target object as a second image.
7. The method of claim 1, wherein prior to displaying the first image and displaying the second image according to the relative position, the method further comprises:
distributing each streaming media node to different servers;
displaying the first image and displaying the second image according to the relative position, wherein the method specifically comprises the following steps:
and displaying the first image, and displaying the second image according to the relative position, wherein the second image is uploaded by the plurality of servers.
8. An apparatus for video scheduling, comprising:
the acquisition module is used for acquiring an image acquired by the main camera related to the target object as a first image and acquiring an image acquired by at least one auxiliary camera related to at least part of the target object as a second image;
a determining module for determining a relative position of the first image and the second image;
and the display module is used for displaying the first image and displaying the second image according to the relative position, wherein the second image is positioned above the first image.
9. A computer-readable storage medium, wherein the storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 7.
10. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 7 when executing the program.
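The display step of claim 1 can be illustrated with a minimal sketch: the second (auxiliary-camera) image is drawn on top of the first (main-camera) image at the determined relative position, picture-in-picture style. The image sizes, the `(row, col)` offset convention, and the function name are illustrative assumptions, not part of the claimed method.

```python
import numpy as np

def overlay_images(first, second, top_left):
    """Overlay `second` onto a copy of `first` at a (row, col) offset.

    Sketch of claim 1's display step: the second image is positioned
    above (i.e. drawn over) the first image at the relative position.
    """
    composed = first.copy()
    r, c = top_left
    h, w = second.shape[:2]
    # the second image covers the corresponding region of the first
    composed[r:r + h, c:c + w] = second
    return composed

# toy example: a 4x4 main-camera frame and a 2x2 auxiliary-camera frame
first = np.zeros((4, 4), dtype=np.uint8)
second = np.ones((2, 2), dtype=np.uint8)
out = overlay_images(first, second, (1, 1))
```

In a real deployment the overlay would typically be done by the rendering layer (e.g. a video wall or web player) rather than by pixel copying, but the positioning logic is the same.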
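Claims 2 and 4 describe marking the main cameras on a map, connecting them in sequence into an acquisition route, and then stepping to the next camera's image along that route. A hedged sketch, with hypothetical camera identifiers and a wrap-around policy that the claims do not specify:

```python
def build_route(cameras):
    """Connect the main cameras in sequence into an acquisition route
    (claim 2): a list of consecutive camera pairs."""
    return [(cameras[i], cameras[i + 1]) for i in range(len(cameras) - 1)]

def next_camera(cameras, current):
    """Pick the next main camera along the connection order (the source of
    claim 4's third image). Wrapping to the start is an assumption."""
    i = cameras.index(current)
    return cameras[(i + 1) % len(cameras)]

cams = ["cam-A", "cam-B", "cam-C"]  # hypothetical camera identifiers
route = build_route(cams)
```

Here `build_route(cams)` yields the two route segments `("cam-A", "cam-B")` and `("cam-B", "cam-C")`, and `next_camera` walks that order to select the next target object's image.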
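Claim 7 distributes the streaming media nodes across different servers so the second image can be uploaded by a plurality of servers. One simple assignment policy is round-robin; the claim does not prescribe a policy, so the scheme below is an illustrative assumption:

```python
def distribute_nodes(nodes, servers):
    """Assign each streaming-media node to a server in round-robin order.

    Round-robin is an assumed policy; claim 7 only requires that the
    nodes be distributed to different servers.
    """
    return {node: servers[i % len(servers)] for i, node in enumerate(nodes)}

# hypothetical node and server names
assignment = distribute_nodes(["n1", "n2", "n3"], ["s1", "s2"])
```

With two servers and three nodes, `n1` and `n3` land on `s1` and `n2` on `s2`, so uploads of the second image are spread over multiple servers as the claim requires.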
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310995404.8A CN117014560A (en) | 2023-08-09 | 2023-08-09 | Video scheduling method, device, medium and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117014560A (en) | 2023-11-07 |
Family
ID=88563300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310995404.8A Pending CN117014560A (en) | 2023-08-09 | 2023-08-09 | Video scheduling method, device, medium and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117014560A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||