CN115103125A - Broadcasting directing method and device - Google Patents
- Publication number
- CN115103125A (application CN202210826557.5A)
- Authority
- CN
- China
- Prior art keywords
- video
- global
- scene
- local
- matched
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4053—Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4084—Scaling of whole images or parts thereof, e.g. expanding or contracting in the transform domain, e.g. fast Fourier transform [FFT] domain scaling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/262—Analysis of motion using transform domain methods, e.g. Fourier domain methods
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23412—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs for generating or manipulating the scene composition of objects, e.g. MPEG-4 objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/23424—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D30/00—Reducing energy consumption in communication networks
- Y02D30/70—Reducing energy consumption in communication networks in wireless communication networks
Abstract
The present application provides a broadcast directing method and apparatus. The method comprises the following steps: acquiring a global video; searching, based on an instruction, for a scene matched with the instruction; acquiring, based on the scene, a local video matched with the scene; transforming the local video onto the corresponding block of the global video to obtain a global fusion video matched with the scene; and playing the global fusion video in a first window while playing the local video in a second window. The method and apparatus aim to solve the technical problem of low processing speed and efficiency of director video in the prior art.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a broadcast directing method and apparatus.
Background
With improvements in computing power and in camera resolution and field of view, expectations for image and video quality keep rising: users want high-resolution panoramic images that offer a wide field of view without losing the detail of the image and video. A common setup therefore uses a global camera to capture a global video and local cameras to capture local videos. The global video has low resolution and cannot capture local details; a local video has high resolution, but its position within the global video is unknown. In the prior art, in order to present the position of a given local video within the global video, all local videos are typically fused with the global video, which entails a large processing load and low processing efficiency.
Disclosure of Invention
The embodiments of the present invention provide a broadcast directing method and apparatus, aiming to solve the technical problem of low processing speed and efficiency of director video in the prior art.

The present invention provides a broadcast directing method, comprising the following steps:
acquiring a global video;
based on an instruction, searching a scene matched with the instruction in the global video;
acquiring a local video matched with the scene based on the scene;
transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene;
and playing the global fusion video in a first window, and playing the local video in a second window.
Optionally, the step of transforming the local video onto the corresponding block of the global video to obtain the global fusion video matched with the scene specifically includes: finding the corresponding block of the local video in the global video using a block matching algorithm; and registering the local video with the global video based on the corresponding block to obtain the global fusion video matched with the scene.
Optionally, the method further comprises: acquiring a light field video, upsampling the light field video by a set factor to obtain a sampled light field video, and performing a Fourier transform on the sampled light field video to obtain a first video; after the step of transforming the local video onto the corresponding block of the global video to obtain the global fusion video matched with the scene, the method further includes: high-pass filtering the global fusion video to obtain a second video; linearly adding the first video and the second video and performing an inverse Fourier transform to obtain a third video; and playing the third video in a third window.
Optionally, the first window and the second window are displayed simultaneously on the same interface.
Optionally, a region corresponding to the local video is visually identified in the global fusion video.
Optionally, the instruction comprises at least one of a voice instruction, a touch instruction, and a gesture instruction.
The embodiment of the present application further provides a broadcast directing apparatus, including:
the first acquisition module is used for acquiring a global video;
the searching module is used for searching a scene matched with the instruction in the global video based on the instruction;
the second acquisition module is used for acquiring a local video matched with the scene based on the scene;
the transformation module is used for transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene; and
and the playing module is used for playing the global fusion video in a first window and playing the local video in a second window.
Optionally, the transformation module is further adapted to: find the corresponding block of the local video in the global video using a block matching algorithm; and register the local video with the global video based on the corresponding block to obtain the global fusion video matched with the scene.
Optionally, the present application also proposes a computer-readable storage medium, on which a computer program is stored, wherein the computer program is configured to implement the steps of the method as described above when executed.
Optionally, the present application also proposes a computer device, including a processor, a memory and a computer program stored on the memory, wherein the processor implements the steps of the method when executing the computer program.
According to the method, a scene matching the instruction is found in the global video based on the instruction, and a local video matching that scene is then retrieved; the local video is transformed onto its corresponding block in the global video to obtain a global fusion video matched with the scene. Because only one local video is fused with the global video, the processing load is small and processing efficiency is improved. Moreover, the global fusion video presents high resolution only on the corresponding block while the other regions remain at low resolution, so the user can visually locate, within the global video, the position of the local video referred to by the instruction. Playing the global fusion video in a first window and the local video in a second window lets the user compare the two and obtain the information of interest.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings required for describing the embodiments are briefly introduced below. The drawings described below are only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a director device according to an embodiment of the present application;
fig. 2 is a flowchart illustrating a director method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a director apparatus according to an embodiment of the present application;
fig. 4 is an internal structural diagram of a computer device provided in an embodiment of the present application.
Detailed Description
In order to more clearly illustrate the technical solutions of the embodiments of the present specification, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only examples or embodiments of the present specification; those skilled in the art can apply the present specification to other similar scenarios based on these drawings without inventive effort. Unless it is obvious from the context or otherwise indicated, like reference numbers in the figures refer to the same structure or operation.
It should be understood that "system," "device," "unit," and/or "module" as used herein is a method for distinguishing between different components, elements, parts, portions, or assemblies of different levels. However, other words may be substituted by other expressions if they accomplish the same purpose.
As used in this specification and the appended claims, the terms "a," "an," "the," and/or "the" are not intended to be inclusive in the singular, but rather are intended to be inclusive in the plural, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that steps and elements are included which are explicitly identified, that the steps and elements do not form an exclusive list, and that a method or apparatus may include other steps or elements.
Flow charts are used in this specification to illustrate operations performed by a system according to its embodiments. It should be understood that the operations are not necessarily performed exactly in the order shown; steps may instead be processed in reverse order or simultaneously, and other operations may be added to, or removed from, these processes.
Fig. 1 is a schematic diagram of an application scenario of a director device according to some embodiments of the present application. As shown in fig. 1, director apparatus 100 may include a server 110, a network 120, a group of image capturing devices 130, and a memory 140.
In some embodiments, the server 110 may include a processing device 112. The processing device 112 may process information and/or data related to the human-computer interaction system to perform one or more of the functions described herein. For example, the processing device 112 may determine an imaging control strategy based on interaction instructions and/or historical data. In some embodiments, the processing device 112 may include at least one processing unit (e.g., a single-core or multi-core processing engine). In some embodiments, the processing device 112 may be part of the image capture device group 130.
The network 120 may provide a conduit for the exchange of information. In some embodiments, network 120 may include one or more network access points. One or more components of the director device 100 may connect to the network 120 through an access point to exchange data and/or information. In some embodiments, at least one component in director device 100 may access data or instructions stored in memory 140 via network 120.
The image capturing device group 130 may be composed of a plurality of image capturing devices, and the types of the image capturing devices are not limited, and may be, for example, a camera, a light field camera, or a mobile terminal having an image capturing function.
In some embodiments, the memory 140 may store data and/or instructions that the processing device 112 may execute or use to perform the example methods described herein. For example, the memory 140 may store historical data. In some embodiments, the memory 140 may be directly connected to the server 110 as back-end storage. In some embodiments, the memory 140 may be part of the server 110 or of the image capture device group 130.
Fig. 2 shows a flowchart of the director method provided in the embodiments of the present application. Referring to fig. 2, the present application further provides a broadcast directing method, including the following steps:
acquiring a global video;
acquiring a scene matched with an instruction based on the instruction;
acquiring a local video matched with the scene based on the scene;
transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene;
and playing the global fusion video in a first window, and playing the local video in a second window.
In the embodiment of the present application, a scene matching the instruction is obtained based on the instruction, and the local video matching that scene is then retrieved; the local video is transformed onto its corresponding block in the global video to obtain a global fusion video matched with the scene. Because only one local video is fused with the global video, the processing load is small and processing efficiency is improved. Moreover, the global fusion video presents high resolution only on the corresponding block while the other regions remain at low resolution, so the user can visually locate, within the global video, the position of the local video referred to by the instruction. Playing the global fusion video in a first window and the local video in a second window lets the user compare the two and obtain the information of interest.
It should be noted that the global video is captured by a global camera, and one global video corresponds to multiple local videos. If the global video and the local videos are offline videos, the local videos and their corresponding scenes are marked, each mark corresponding to a different instruction. If the global video and the local videos are live videos, the association between scenes and local cameras can be preset, again with each association corresponding to a different instruction. When an instruction is received, the corresponding scene is obtained, and the local video matched with that scene is acquired based on the scene.
To improve directing speed and avoid data overflow on the server, in the embodiment of the present application the global video is stored in local memory while the local videos are stored in remote or cloud storage. The flow is then: acquire the global video; acquire, based on an instruction, the scene matched with the instruction; acquire, based on the scene, the local video matched with the scene from remote or cloud storage; and transform the local video onto the corresponding block of the global video to obtain the global fusion video matched with the scene, as sketched below.
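Purely as an illustration, the following Python sketch shows one way this lookup-and-fetch flow could be organized. All names here (SCENE_INDEX, LOCAL_SOURCES, fetch_remote, fuse_into_global) are hypothetical stand-ins, not identifiers from the patent, and the stream URLs are made up.

```python
# Illustrative sketch only: the mapping tables, stub functions and URLs
# below are assumptions, not APIs or data from the patent.

# Prepared ahead of time: each instruction maps to a scene, and each scene
# to the local video stream that covers it (cf. the offline/live marking
# described above).
SCENE_INDEX = {
    "show goal area": "goal_area",
    "show stage left": "stage_left",
}
LOCAL_SOURCES = {
    "goal_area": "rtmp://remote-store/cam3",   # kept in remote/cloud storage
    "stage_left": "rtmp://remote-store/cam7",
}

def fetch_remote(url: str) -> str:
    """Stub: pull only the requested local video from remote/cloud storage."""
    return f"<local video from {url}>"

def fuse_into_global(local_video: str, global_video: str) -> str:
    """Stub: block-match the local video into the global video and warp it
    onto its corresponding block (detailed in the transformation steps below)."""
    return f"<{global_video} with {local_video} fused in>"

def direct(instruction: str, global_video: str):
    """Resolve an instruction to a scene, fetch one local video, fuse it."""
    scene = SCENE_INDEX.get(instruction)
    if scene is None:
        raise KeyError(f"no scene matches instruction {instruction!r}")
    local_video = fetch_remote(LOCAL_SOURCES[scene])   # one stream, not all
    fused = fuse_into_global(local_video, global_video)
    return fused, local_video  # play in the first and second window

fused, local = direct("show goal area", "<global video>")
```

Note how only the single matched local video is fetched and fused, which is what keeps the processing load small compared with fusing every local video.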
In an embodiment of the present application, the instruction may be at least one of a voice instruction, a touch instruction, and a gesture instruction.
As an optional implementation manner of the foregoing embodiment, the step of transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene specifically includes:
and finding the corresponding block of the local video in the global video by adopting a block matching algorithm. In general, a zero-mean normalized cross-correlation block matching algorithm (abbreviated as "ZNCC algorithm") is used for block matching, preferably, two ZNCC iterations are performed, a corresponding block of the local video in the global video is found, and a pixel matching relationship between the local video and the global reference video is obtained.
Registering the local video with the global video based on the corresponding block then yields the global fusion video matched with the scene. In general, the local video and the global video are registered using an overall transformation, a mesh-based transformation, and a temporally and spatially smooth transformation to obtain the matched global fusion video of the scene.
Overall transformation: the matched local video and its corresponding block are taken as a pair, and the ZNCC algorithm is used to extract and match feature points between the local video and its corresponding block, yielding the corresponding (matched) feature point pairs. In a preferred embodiment of the present invention, two ZNCC iterations are performed to extract the feature point pairs and compute the homography matrix; each iteration can be written as

$$\{p_l \leftrightarrow p_r\} \;=\; \operatorname*{arg\,max}_{\;\|p_r - \pi(H\,\bar p_l)\| \le \varepsilon} \mathrm{ZNCC}\big(I_l(p_l),\, I_r(p_r)\big)$$

where $\{p_l \leftrightarrow p_r\}$ denotes a matching between the local video and its corresponding block (i.e., their matching relationship); $I_l$ and $I_r$ denote the local video and its corresponding block in the global reference video, respectively; $p_l$ and $p_r$ are corresponding feature points of $I_l$ and $I_r$, i.e., a feature point pair; $\mathrm{ZNCC}(\cdot)$ is the energy function between the local video and its corresponding block computed with the ZNCC algorithm; $H$ is the homography matrix, initialized to the identity matrix; $\bar p_l$ is the normalized form of $p_l$, and $\pi(\cdot)$ denotes the central projection followed by de-normalization; $w$ is the side length of the (square) local video patch; and $\varepsilon$ is the search width.
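As an illustration, here is a minimal NumPy sketch of ZNCC block matching within a search width around a predicted position (e.g., the projection of the patch through the current homography). The function names and the exhaustive search are assumptions made for clarity, not the patent's actual implementation.

```python
import numpy as np

def zncc(a: np.ndarray, b: np.ndarray) -> float:
    """Zero-mean normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def find_corresponding_block(local: np.ndarray, global_ref: np.ndarray,
                             predicted: tuple, eps: int = 16):
    """Search a (2*eps+1)^2 neighborhood of the predicted top-left corner
    in the global reference frame and return the best-matching position."""
    h, w = local.shape
    py, px = predicted
    best_score, best_pos = -1.0, predicted
    for dy in range(-eps, eps + 1):
        for dx in range(-eps, eps + 1):
            y, x = py + dy, px + dx
            if y < 0 or x < 0 or y + h > global_ref.shape[0] \
                    or x + w > global_ref.shape[1]:
                continue  # candidate block falls outside the global frame
            score = zncc(local, global_ref[y:y + h, x:x + w])
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

In the iterative scheme above, the homography H estimated from one round of matches would supply the predicted position for the next round.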
A mesh-based transformation is then performed: starting from the preliminary globally transformed video, the feature point pairs extracted during the overall transformation are warped mesh-by-mesh using an ASAP (as-similar-as-possible) warping framework, and the mesh result is then refined with an optical-flow-based transformation to optimize the pixel matching relationship, yielding more reliable feature point pairs and more successfully matched feature points and refined optical flow in the local video. Combining the distortion from the optical-flow transformation with the stability of the local video, the homography matrix is recomputed, which completes the mesh-based and optical-flow-based transformations and gives the transformation result. The transformed and registered local video is then color-calibrated. A rough sketch of the refinement step follows.
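As a rough illustration of the refinement idea only (not the full ASAP mesh warp), the following OpenCV sketch displaces matched feature points by dense optical flow toward the global block and re-estimates the homography. The Farneback parameters and function names are assumptions.

```python
import cv2
import numpy as np

def refine_homography(warped_local: np.ndarray, global_block: np.ndarray,
                      pts: np.ndarray) -> np.ndarray:
    """pts: N x 2 float32 feature points in warped_local coordinates (N >= 4).
    Returns a homography re-estimated after optical-flow refinement."""
    flow = cv2.calcOpticalFlowFarneback(
        cv2.cvtColor(warped_local, cv2.COLOR_BGR2GRAY),
        cv2.cvtColor(global_block, cv2.COLOR_BGR2GRAY),
        None, 0.5, 3, 15, 3, 5, 1.2, 0)  # illustrative parameters
    xs = pts[:, 0].astype(int)
    ys = pts[:, 1].astype(int)
    refined = pts + flow[ys, xs]  # flow-corrected matches in the global block
    # RANSAC keeps the homography stable against unreliable flow vectors.
    H, _ = cv2.findHomography(pts, refined, cv2.RANSAC, 3.0)
    return H
```

The idea mirrors the text: optical flow supplies per-pixel distortion, while the recomputed homography retains the stability of the local video.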
In a specific embodiment, a temporally and spatially smooth transformation is performed by introducing a temporal stability constraint; the energy function of the smooth transformation is:

$$E(V) = \lambda_r E_r(V) + \lambda_t E_t(V) + \lambda_s E_s(V)$$

where $V$ denotes the homography of the transformation, which depends on the mesh vertices; $E_r(V)$ is the sum of the distances between each pair of feature points of the local videos in the globally transformed video and the global reference video; $E_t(V)$ is the temporal stability constraint; $E_s(V)$ is a spatial smoothness term defined over the spatial deformation between adjacent vertices; and $\lambda_r$, $\lambda_t$, and $\lambda_s$ are all constants greater than 0. In the temporal term, $p'_l$ is the feature point in the temporal prior map corresponding to the feature point $p_l$ of the local video; $B$ is an indicator function that checks whether pixel $p_l$ lies on the static background, with $B(p_l) = 0$ denoting that $p_l$ lies on the moving background; and $S$ is the overall transformation function between the local video and its temporal prior map.
After this series of transformations and registration, a global high-resolution video is obtained. Since differing color and illumination across the local cameras can leave the local videos color-inconsistent within the global high-resolution video, each local video can be color-corrected until it is consistent with the global reference video, so that the global high-resolution video has a uniform color style overall. In addition, the global high-resolution video can be optimized further: the overlap between transformed local videos is removed with a graph-cut method to minimize the video registration error.
As an optional implementation of the above embodiment, the method further comprises: acquiring a light field video, upsampling the light field video by a set factor to obtain a sampled light field video, and performing a Fourier transform on the sampled light field video to obtain a first video. After the step of transforming the local video onto the corresponding block of the global video to obtain the global fusion video matched with the scene, the method further includes: high-pass filtering the global fusion video to obtain a second video; linearly adding the first video and the second video and performing an inverse Fourier transform to obtain a third video; and playing the third video in a third window. After the global high-resolution video is obtained, video super-resolution needs to be performed on the global light field video to overcome its low spatial resolution. The specific method is as follows: the global light field video of low (spatial) resolution is upsampled by the set magnification factor to obtain a sampled low-resolution light field video; a Fourier transform is applied to the sampled video to obtain a first spectrum video, which is then low-pass filtered; the global high-resolution video is high-pass filtered to obtain a second spectrum video; the low-pass-filtered first spectrum video and the second spectrum video are linearly added, and an inverse Fourier transform is applied to obtain a global high-resolution light field video. The set factor is $f_h/f_l$, where $f_h$ and $f_l$ are the focal lengths of the local camera and the global camera, respectively. A per-frame sketch of this spectral fusion follows.
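A minimal per-frame NumPy sketch of this spectral fusion, assuming single-channel frames of equal size and an illustrative radial cutoff; the weighting parameter alpha is an assumption, since the text only specifies a linear addition.

```python
import numpy as np

def fuse_spectra(lf_frame: np.ndarray, global_hr_frame: np.ndarray,
                 cutoff: float = 0.1, alpha: float = 1.0) -> np.ndarray:
    """Low frequencies from the upsampled light-field frame, high frequencies
    from the global high-resolution frame, combined in the Fourier domain."""
    assert lf_frame.shape == global_hr_frame.shape  # lf already upsampled by f_h/f_l
    F_lf = np.fft.fftshift(np.fft.fft2(lf_frame))
    F_hr = np.fft.fftshift(np.fft.fft2(global_hr_frame))
    h, w = lf_frame.shape
    yy, xx = np.mgrid[-(h // 2):(h + 1) // 2, -(w // 2):(w + 1) // 2]
    radius = np.sqrt((yy / h) ** 2 + (xx / w) ** 2)
    low_pass = radius <= cutoff          # low-pass mask for the light field
    fused = F_lf * low_pass + alpha * F_hr * (~low_pass)  # linear addition
    return np.real(np.fft.ifft2(np.fft.ifftshift(fused)))  # inverse FFT
```

Applying this to every frame of the upsampled light field video yields the global high-resolution light field video that is played in the third window.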
As an optional implementation of the above embodiment, the first window and the second window are displayed simultaneously in the same interface, which makes it easier for the user to pick out the information of interest in the video. In general, the second window floats over the first window, close to the position of the local video within the global video, for ease of observation.

Further, the first window, the second window, and the third window may all be displayed simultaneously in the same interface. In that case the second window floats over the first and third windows, again close to the position of the local video within the global video, for ease of observation.
As an optional implementation of the above embodiment, the region corresponding to the local video is visually identified in the global fusion video, for example by circling it or pointing to it, as sketched below.
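For example, a minimal OpenCV sketch of the "circling" style of identification; the coordinates, color, and thickness are illustrative.

```python
import cv2
import numpy as np

def mark_local_region(fused_frame: np.ndarray, x: int, y: int,
                      w: int, h: int) -> np.ndarray:
    """Outline the block of the global fusion video onto which the
    local video was warped, so the viewer can spot it at a glance."""
    out = fused_frame.copy()
    cv2.rectangle(out, (x, y), (x + w, y + h), color=(0, 255, 0), thickness=3)
    return out
```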
As shown in fig. 3, an embodiment of the present application further provides a director device, including:
a first obtaining module 100, configured to obtain a global video;
a searching module 200, configured to search, based on an instruction, for a scene matched with the instruction;
a second obtaining module 300, configured to obtain, based on the scene, a local video matching the scene;
a transformation module 400, configured to transform the local video to a corresponding block of the global video, so as to obtain a global fusion video matched with the scene; and
the playing module 500 is configured to play the global fusion video in a first window, and play the local video in a second window.
The transformation module 400 is further adapted to: finding out a corresponding block of the local video in the global video by adopting a block matching algorithm; registering the local video and the global video based on the corresponding blocks to obtain a matched global fusion video of the scene.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the modules/units/sub-units/components in the above-described apparatus may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In some embodiments, a computer device is provided, which may be a server whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database of the computer device stores data related to the image capture devices. The network interface of the computer device communicates with external terminals through a network connection. The computer program, when executed by the processor, implements the broadcast directing method.
In some embodiments, a computer device is provided, which may be a terminal whose internal structure may be as shown in fig. 4. The computer device includes a processor, a memory, a communication interface, a display screen, and an input device connected by a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device performs wired or wireless communication with external terminals; wireless communication can be realized through WIFI, an operator network, NFC (near field communication), or other technologies. The computer program, when executed by the processor, implements the broadcast directing method. The display screen of the computer device may be a liquid crystal display or an electronic ink display, and the input device may be a touch layer covering the display screen, a key, trackball, or touchpad on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 4 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In some embodiments, there is further provided a computer device comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium is provided, in which a computer program is stored, which computer program, when being executed by a processor, carries out the steps of the above-mentioned method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware. The computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the method embodiments described above. Any reference to memory, storage, a database, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. Non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, or the like. Volatile memory can include random access memory (RAM) or an external cache memory. By way of illustration and not limitation, RAM may take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope of this specification.
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
To sum up, the present application further provides a broadcast directing method, including:
acquiring a global video;
based on an instruction, searching a scene matched with the instruction in the global video;
acquiring a local video matched with the scene based on the scene;
transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene;
and playing the global fusion video in a first window, and playing the local video in a second window.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a logical functional division, and there may be other divisions in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings or direct couplings or communication connections shown or discussed may be implemented through communication interfaces, as indirect couplings or communication connections between devices or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments provided in the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus once an item is defined in one figure, it need not be further defined and explained in subsequent figures, and moreover, the terms "first", "second", "third", etc. are used merely to distinguish one description from another and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate the technical solutions of the present application rather than to limit them, and the protection scope of the present application is not limited thereto. Although the present application is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person skilled in the art can, within the technical scope disclosed in the present application, modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application and are intended to be covered by the protection scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (10)
1. A method for directing broadcasting is characterized by comprising the following steps:
acquiring a global video;
based on an instruction, searching a scene matched with the instruction;
acquiring a local video matched with the scene based on the scene;
transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene;
and playing the global fusion video in a first window, and playing the local video in a second window.
2. The method according to claim 1, wherein the step of transforming the local video onto the corresponding block of the global video to obtain a global fusion video matching the scene specifically comprises:
finding out a corresponding block of the local video in the global video by adopting a block matching algorithm;
registering the local video and the global video based on the corresponding blocks to obtain a global fusion video matched with the scene.
3. The method of claim 1, wherein the method further comprises:
a light-field video is acquired and,
amplifying the light field video by a set multiple for sampling to obtain a sampled light field video, and performing Fourier transform on the sampled light field video to obtain a first video;
after the step of transforming the local video onto the corresponding block of the global video to obtain a global fusion video matching the scene, the method further includes:
carrying out high-pass filtering on the global fusion video to obtain a second video;
performing linear addition on the first video and the second video, and performing an inverse Fourier transform to obtain a third video;
and playing the third video in a third window.
4. The method of claim 1, wherein the first window and the second window are presented simultaneously under the same interface.
5. The method of claim 1, wherein regions corresponding to the local videos are visually identified in the global fusion video.
6. The method of claim 1, wherein the instruction comprises at least one of a voice instruction, a touch instruction, and a gesture instruction.
7. A director device, comprising:
the first acquisition module is used for acquiring a global video;
the searching module is used for searching a scene matched with the instruction based on the instruction;
the second acquisition module is used for acquiring a local video matched with the scene based on the scene;
the transformation module is used for transforming the local video to a corresponding block of the global video to obtain a global fusion video matched with the scene; and
and the playing module is used for playing the global fusion video in a first window and playing the local video in a second window.
8. The apparatus of claim 7, wherein the transformation module is further adapted to: finding out a corresponding block of the local video in the global video by adopting a block matching algorithm;
registering the local video and the global video based on the corresponding blocks to obtain a global fusion video matched with the scene.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed, performs the steps of the method according to any one of claims 1 to 6.
10. A computer device comprising a processor, a memory, and a computer program stored on the memory, characterized in that the steps of the method according to any one of claims 1-6 are implemented when the computer program is executed by the processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210826557.5A CN115103125B (en) | 2022-07-13 | 2022-07-13 | Guide broadcasting method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210826557.5A CN115103125B (en) | 2022-07-13 | 2022-07-13 | Guide broadcasting method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115103125A true CN115103125A (en) | 2022-09-23 |
CN115103125B CN115103125B (en) | 2023-05-12 |
Family
ID=83297324
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210826557.5A Active CN115103125B (en) | 2022-07-13 | 2022-07-13 | Guide broadcasting method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115103125B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024169174A1 (en) * | 2023-02-14 | 2024-08-22 | 华为技术有限公司 | External camera directing method and video conference terminal |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20090080695A1 (en) * | 2007-09-24 | 2009-03-26 | New Span Opto-Technology, Inc. | Electro-optical Foveated Imaging and Tracking System |
CN107959805A (en) * | 2017-12-04 | 2018-04-24 | 深圳市未来媒体技术研究院 | Light field video imaging system and method for processing video frequency based on Hybrid camera array |
CN110086994A (en) * | 2019-05-14 | 2019-08-02 | 宁夏融媒科技有限公司 | A kind of integrated system of the panorama light field based on camera array |
CN110781350A (en) * | 2019-09-26 | 2020-02-11 | 武汉大学 | Pedestrian retrieval method and system oriented to full-picture monitoring scene |
CN112367474A (en) * | 2021-01-13 | 2021-02-12 | 清华大学 | Self-adaptive light field imaging method, device and equipment |
- 2022-07-13: application CN202210826557.5A filed; granted as CN115103125B (status: active)
Also Published As
Publication number | Publication date |
---|---|
CN115103125B (en) | 2023-05-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |