CN112037264B - Video fusion system and method - Google Patents

Video fusion system and method

Info

Publication number
CN112037264B
CN112037264B (application CN202011206384.4A)
Authority
CN
China
Prior art keywords
information
video
scene
fusion
moving object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011206384.4A
Other languages
Chinese (zh)
Other versions
CN112037264A (en)
Inventor
叶德建 (Ye Dejian)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Qinghe Technology Co ltd
Zhejiang Qinghe Technology Co ltd
Original Assignee
Shanghai Qinghe Technology Co ltd
Zhejiang Qinghe Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Qinghe Technology Co ltd, Zhejiang Qinghe Technology Co ltd filed Critical Shanghai Qinghe Technology Co ltd
Priority to CN202011206384.4A priority Critical patent/CN112037264B/en
Publication of CN112037264A publication Critical patent/CN112037264A/en
Application granted granted Critical
Publication of CN112037264B publication Critical patent/CN112037264B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS; G06: COMPUTING, CALCULATING OR COUNTING; G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/246: Image analysis; analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 2207/10016: Image acquisition modality; video; image sequence
    • G06T 2207/20221: Special algorithmic details; image combination; image fusion; image merging
    • G06T 2207/30196: Subject of image; human being; person
    • G06T 2207/30201: Subject of image; face
    • G06T 2207/30242: Subject of image; counting objects in image

Abstract

The application relates to a video fusion system and a video fusion method. The video fusion system includes: a video acquisition module for acquiring scene video information; a moving object data statistics system for extracting moving object information from the scene video information; a data generation module for generating scene data from the scene video information and/or the moving object information; and a fusion calculation module for performing fusion calculation on the scene video information and the scene data to generate fusion information, wherein the fusion information comprises at least one item of scene data and one item of scene video information. The video fusion system reduces cost, reduces points of failure, and markedly improves overall reliability and extensibility.

Description

Video fusion system and method
Technical Field
The application relates to the technical field of cloud computing, in particular to a video fusion system and a video fusion method.
Background
In the prior art, the videos and other information of a scene cannot be brought together. In a scenic spot, for example, many cameras, broadcasting devices, alarm devices, and the like are deployed, but each works independently; coordinating them is a manual process, which wastes manpower and introduces considerable delay.
Disclosure of Invention
It is an object of the present application to provide a video fusion system that overcomes or at least mitigates at least one of the above-mentioned disadvantages of the prior art.
The present application first provides a video fusion system, which includes:
the video acquisition module is used for acquiring scene video information, and the scene video information is acquired through one or more of a one-way video application system, a semi-two-way video application system and a two-way video application system;
the moving object data statistics system is used for acquiring moving object information in the scene video information according to the scene video information;
the data generation module is used for generating scene data according to the scene video information and/or the moving object information;
and the fusion calculation module is used for performing fusion calculation on the scene video information and the scene data so as to generate fusion information, wherein the fusion information comprises at least one item of scene data and one item of scene video information.
Optionally, the video fusion system further includes:
the service usage detection module is used for detecting interaction information of the interaction device;
and the service adjusting module is used for adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device.
Optionally, the video fusion system further includes:
the aerial photography information acquisition module is used for acquiring aerial photography information;
and the 3D view generation module is used for generating a scene 3D view according to the aerial photographing information.
Optionally, the video fusion system further includes:
and the 3D fusion module is used for fusing the scene video information and/or the moving object information with the scene 3D view so as to display all information or part of information in the scene video information and/or the moving object information in the scene 3D view.
Optionally, the video fusion system further includes:
the video storage module is used for storing scene video information acquired by one or more of a unidirectional video application system, a semi-bidirectional video application system and a bidirectional video application system;
and the calling module is used for calling the scene video information stored by the video storage module.
Optionally, the moving object data statistics system comprises:
the characteristic identification module is used for identifying the characteristics of the moving objects in the video information of each scene;
a statistics module to obtain one or more of quantity information of the moving objects, position information of each moving object, and feature information of each moving object.
Optionally, the moving object data statistics system further comprises:
the system comprises a specific moving object presetting module, a data processing module and a data processing module, wherein the specific moving object presetting module is used for generating characteristic information of a specific moving object;
and the feature locking tracking module is used for acquiring the moving object which accords with the feature information of the specific moving object in each scene video information.
The application also provides a video fusion method, which comprises the following steps:
scene video information is obtained, and the scene video information is obtained through one or more of a one-way video application system, a semi-two-way video application system and a two-way video application system;
acquiring moving object information in scene video information according to the scene video information;
generating scene data according to the scene video information and/or the moving object information;
and performing fusion calculation on the scene video information and the scene data to generate fusion information, wherein the fusion information comprises at least one item of scene data and one item of scene video information.
Optionally, the video fusion method further includes:
detecting interaction information of an interaction device;
and adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device.
Optionally, the video fusion method further includes:
acquiring aerial photographing information;
and generating a scene 3D view according to the aerial photographing information.
The video fusion system of the application can acquire scene video information from the one-way, semi-two-way, and two-way video application systems, acquire moving object information through the moving object data statistics system, generate scene data from the scene video information and the moving object information, and fuse the scene data with the scene video information through the fusion calculation module to generate fusion information, so that multiple scenes are managed in a single picture. This saves cost, reduces points of failure, and markedly improves overall reliability and extensibility.
Drawings
Fig. 1 is a system diagram of a video fusion system.
Fig. 2 is a schematic diagram of an application in an embodiment of the present application.
Reference numerals:
1. a video acquisition module; 2. a moving object data statistics system; 3. a data generation module; 4. a fusion calculation module; 5. a one-way video application system; 6. a semi-bidirectional video application system; 7. a two-way video application system.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 1 is a system diagram of a video fusion system.
The video fusion system shown in fig. 1 includes a video acquisition module 1, a moving object data statistics system 2, a data generation module 3, and a fusion calculation module 4. The video acquisition module 1 is configured to acquire scene video information, which is acquired through one or more of a unidirectional video application system 5, a semi-bidirectional video application system 6, and a bidirectional video application system 7. The moving object data statistics system 2 is configured to extract moving object information from the scene video information. The data generation module 3 is configured to generate scene data from each item of scene video information and/or the moving object information. The fusion calculation module 4 is configured to perform fusion calculation on the scene video information and the scene data to generate fusion information, wherein the fusion information includes at least one item of scene data and one item of scene video information.
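The handoff between the four modules can be pictured with a minimal sketch. The class and method names below are illustrative assumptions rather than the patent's implementation; the sketch shows only how scene video information flows through statistics, data generation, and fusion.

```python
# Illustrative sketch of the four-module pipeline of fig. 1.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SceneVideo:
    source: str                       # "one-way", "semi-two-way" or "two-way"
    frames: list = field(default_factory=list)

@dataclass
class MovingObjectInfo:
    count: int = 0                    # quantity information
    positions: List[tuple] = field(default_factory=list)
    features: List[dict] = field(default_factory=list)

class VideoFusionSystem:
    def acquire(self, sources: List[str]) -> List[SceneVideo]:
        """Video acquisition module (1): pull scene video information from
        the one-way / semi-two-way / two-way video application systems."""
        return [SceneVideo(source=s) for s in sources]

    def moving_object_stats(self, video: SceneVideo) -> MovingObjectInfo:
        """Moving object data statistics system (2): feature capture on the
        scene video (stubbed here; a counting sketch appears below)."""
        return MovingObjectInfo()

    def generate_scene_data(self, video: SceneVideo,
                            objects: MovingObjectInfo) -> dict:
        """Data generation module (3): turn video and object information
        into storable scene data."""
        return {"source": video.source, "object_count": objects.count}

    def fuse(self, video: SceneVideo, scene_data: dict) -> dict:
        """Fusion calculation module (4): fusion information holds at least
        one item of scene data plus one item of scene video information."""
        return {"video": video, "data": scene_data}
```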
In the present embodiment, the scene video information is information that can be acquired by the unidirectional video application system 5, the semi-bidirectional video application system 6, and the bidirectional video application system 7, and is, for example, a scene image or a scene video. The unidirectional video application system 5 refers to one-way video transmission applications, such as video information release; the semi-bidirectional video application system 6 refers to video transmission applications between one-way and two-way, such as IPTV; the bidirectional video application system 7 refers to real-time two-way video transmission applications, such as video conferencing.
In this embodiment, the moving object data statistics system is configured to acquire, by feature capture or the like, the moving objects, such as people and animals, appearing in the scene images or scene videos.
It is understood that the moving object data statistics system may obtain the number of moving objects by feature capture, for example the number of moving objects within a time period; it may also obtain the features of a moving object in a certain frame or frames, for example a face feature, a clothing feature, or a height feature.
It will be appreciated that such feature recognition may be performed by image recognition techniques.
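As an illustration of such feature capture, the following sketch counts moving objects in a video, assuming OpenCV is available. The patent does not name a specific detection algorithm, so MOG2 background subtraction stands in for it here.

```python
# A minimal sketch of counting moving objects by "feature capture",
# assuming OpenCV; background subtraction is an assumed stand-in for
# the unspecified detection algorithm.
import cv2

def count_moving_objects(video_path: str, min_area: int = 500) -> int:
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2()
    max_simultaneous = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Suppress shadows (marked as gray value 127 by MOG2) and noise.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        blobs = [c for c in contours if cv2.contourArea(c) > min_area]
        max_simultaneous = max(max_simultaneous, len(blobs))
    cap.release()
    return max_simultaneous
```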
For example, the fusion calculation module may fuse a picture or a video with the scene data; for instance, the scene video information acquired by the video acquisition module is fused with the moving object information acquired by the moving object data statistics system to generate a scene picture that carries the moving object information.
In this embodiment, the fusion requires scene data, which is generated by the data generation module. The scene data may be generated as follows: features of the picture are extracted, the picture is represented in matrix form and stored as data, and the moving object information is likewise stored as data.
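A minimal sketch of this generation step follows. The record layout is an assumption for illustration, and a downsampled grayscale matrix stands in for the extracted picture features.

```python
# Sketch of scene-data generation: the picture is represented in matrix
# form and bundled with the moving-object information as storable data.
import json
import numpy as np

def generate_scene_data(image: np.ndarray, object_count: int) -> dict:
    # A coarse grayscale thumbnail stands in for "features of the picture".
    gray = image.mean(axis=2).astype(np.uint8)
    thumb = gray[::16, ::16]                     # downsampled matrix form
    return {
        "image_matrix": thumb.tolist(),          # JSON-serializable matrix
        "object_count": object_count,            # moving object information
    }

record = generate_scene_data(np.zeros((480, 640, 3), dtype=np.uint8), 12)
print(json.dumps(record)[:80])  # ready to be stored as data
```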
In this embodiment, the fusion calculation module may perform the fusion in such a manner that a JAVAWEB back end performs the data reading (reading the scene data) and a VOE front-end layer renders the data in combination with the 3D view, thereby forming the fusion information.
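This read-then-render split can be sketched as follows. Flask stands in for the JAVAWEB back end purely for brevity, and the route and field names are assumptions; the front end is imagined to fetch this JSON and draw it over the 3D view.

```python
# Hedged sketch of the back-end half of the fusion: read stored scene
# data and expose it for a front-end layer to render over the 3D view.
from flask import Flask, jsonify

app = Flask(__name__)
SCENE_DATA = {"object_count": 12}     # stand-in for the scene-data store

@app.route("/scene-data/<int:scene_id>")
def scene_data(scene_id: int):
    # The front end fetches this JSON and renders it onto the 3D view.
    return jsonify({**SCENE_DATA, "scene_id": scene_id})

if __name__ == "__main__":
    app.run(port=8080)
```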
The video fusion system of the application can acquire scene video information from the one-way, semi-two-way, and two-way video application systems, acquire moving object information through the moving object data statistics system, generate scene data from the scene video information and the moving object information, and fuse the scene data with the scene video information through the fusion calculation module to generate fusion information, so that multiple scenes are managed in a single picture. This saves cost, reduces points of failure, and markedly improves overall reliability and extensibility.
In this embodiment, the video fusion system further includes an aerial photography information obtaining module and a 3D view generating module, where the aerial photography information obtaining module is configured to obtain aerial photography information; the 3D view generation module is used for generating a scene 3D view according to the aerial photographing information.
In this embodiment, the video fusion system further includes a 3D fusion module, where the 3D fusion module is configured to fuse the scene video information and/or the moving object information with the scene 3D view, so as to display all or part of information in the scene video information and/or the moving object information in the scene 3D view.
In this embodiment, the video fusion system further includes a visualization module, which is configured to convert all or part of the scene video information and/or the moving object information into visual form, such as a table or a graph.
Referring to fig. 2, fig. 2 illustrates an embodiment in which the video fusion system of the present application is applied to a scenic spot. Through the technology of the present application, a 3D view of the scene is obtained: specifically, aerial photography information, such as the scene view of fig. 2, is obtained through the aerial photography information obtaining module, and the scene 3D view is generated through the 3D view generating module.
In this embodiment, scene video information and/or moving object information is merged into the scene 3D view. For example, along the lower side of fig. 2 (taking the orientation of the drawing under a normal viewing angle as reference) there are a plurality of selectable small views. These are acquired by the video acquisition module, for example from scenic-spot cameras, through one or more of the unidirectional video application system, the semi-bidirectional video application system, and the bidirectional video application system.
For another example, the upper left of fig. 2 shows passenger flow data for the past 30 days. This data is obtained by the moving object data statistics system: by feature capture, moving objects are captured from each video and counted, and the number of moving objects within a given time span is obtained as the moving object information.
The data generation module of the present application turns this information into data: for example, an image is processed and stored in a database in data form, and the moving object information is likewise stored in the database. The information stored in the database is the scene data. When it is needed, the required scene data is called up, rendered into an expression form such as a picture or a table, and fused into the scene 3D view. The scene video information and/or the moving object information is thus expressed in some form, for example text, a table, or a picture embedded in the 3D picture.
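A sketch of this store-then-call flow follows, assuming a SQLite table as the database and an invented schema; the query result could feed a chart such as the 30-day passenger flow of fig. 2.

```python
# Sketch of storing moving-object counts as scene data and calling them
# back out for display. Table and column names are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE scene_data (
                    day TEXT, scene_id INTEGER, visitor_count INTEGER)""")
rows = [(f"2020-10-{d:02d}", 1, 100 + d) for d in range(1, 31)]
conn.executemany("INSERT INTO scene_data VALUES (?, ?, ?)", rows)

# "Calling" the scene data to build a table or graph representation.
flow = conn.execute(
    "SELECT day, visitor_count FROM scene_data "
    "WHERE scene_id = ? ORDER BY day", (1,)).fetchall()
print(flow[:3])   # e.g. [('2020-10-01', 101), ...] -> feed to a chart
```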
For example, the upper left portion of fig. 2 shows approximately 30 days of passenger flow data, which may be captured by the moving object data statistics system and displayed graphically.
For another example, the lower side of fig. 2 shows a scene picture for each individual scenic point; these pictures can be acquired by the video acquisition module and embedded into the 3D picture.
As another example, the upper right of fig. 2 shows today's passenger flow data, which may likewise be captured by the moving object data statistics system and displayed in graphical form.

In this embodiment, the video fusion system further includes a video storage module and a calling module. The video storage module is configured to store scene video information acquired through one or more of the unidirectional, semi-bidirectional, and bidirectional video application systems; the calling module is configured to call the scene video information stored by the video storage module.
Through storage and calling, a user can retrieve the data as required.
In this embodiment, the moving object data statistics system includes a feature identification module and a statistics module, where the feature identification module is configured to identify features of moving objects in each scene video information; the statistical module is used for acquiring one or more of quantity information of the movable objects, position information of each movable object and feature information of each movable object.
In this embodiment, the moving object data statistics system further includes a specific moving object presetting module and a feature locking tracking module. The specific moving object presetting module is configured to generate the feature information of a specific moving object; the feature locking tracking module is configured to find, in each item of scene video information, the moving object that matches the feature information of the specific moving object.
In this way, tracking can be performed. For example, in a tourist attraction the scene video information is acquired through various cameras. When a child becomes separated from a parent, the specific moving object presetting module generates the feature information of the specific moving object, namely the physical and clothing features of the lost child; the moving object matching that feature information is then located in the scene video information, so the lost child can be found.
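The feature-lock matching can be sketched as below, with a clothing colour histogram as an assumed stand-in for the patent's unspecified feature comparison (OpenCV assumed available). The preset histogram would be built once from a reference photo of the child.

```python
# Sketch of feature-lock tracking: preset feature information for a
# specific moving object, then test detections against it.
import cv2
import numpy as np

def clothing_histogram(bgr_patch: np.ndarray) -> np.ndarray:
    """Hue/saturation histogram of a detected person's image patch."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def matches_preset(detection_patch: np.ndarray,
                   preset_hist: np.ndarray, threshold: float = 0.7) -> bool:
    """True if a detected moving object matches the preset features."""
    score = cv2.compareHist(clothing_histogram(detection_patch),
                            preset_hist.astype(np.float32),
                            cv2.HISTCMP_CORREL)
    return score >= threshold

# preset_hist = clothing_histogram(reference_photo_patch)  # built once
```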
In this embodiment, the video fusion system further includes a service usage detection module and a service adjustment module, where the service usage detection module is used to detect the interaction information of the interaction device; the service adjusting module is used for adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device.
In this embodiment, the interaction device may be a vending machine, a security inspection device, a ticket taker, a ticket checker, etc. in a scene.
By this means, interaction devices that are not needed in the scene can be removed or hidden through self-adjustment, saving resources and improving the customer experience.
In one embodiment, the step of detecting the interaction information of the interaction device by the service usage detection module specifically includes: obtaining the number of times users interact with the corresponding interaction device within a period (for example, a day, a month, or a year).
In this embodiment, the service adjusting module adjusting the interactive service content provided by the corresponding interaction device according to the interaction information includes: setting a threshold on the number of uses of the service within one period; judging whether the number of interactions exceeds the threshold, and if not, adjusting the interaction device. It is understood that if the threshold is exceeded, no adjustment is made.
It will be appreciated that the adjustment may take the form of removing the service, moving the device to a more frequented area, and so on.
In an alternative embodiment, the interaction information of the interaction device includes data on a user's interactions with the interaction device within a period, recording which selectable interaction mode was chosen.
In this alternative embodiment, the service adjusting module adjusting the interactive service content provided by the corresponding interaction device according to the interaction information includes: setting a usage-count threshold for each selectable interaction mode according to the selectable interaction modes of the service; judging, for each selectable interaction mode adopted by the user, whether its use within one period exceeds the corresponding threshold, and if not, adjusting the service. It is understood that if the threshold is exceeded, no adjustment is made. The adjustment may take the form of removing the service, moving the device to an area with more foot traffic, or removing the selectable interaction mode.
In this embodiment, the interaction device is a video-on-demand device, and the selectable interaction mode is a charging (payment) mode.
The present application is further elaborated below by way of example. It will be understood that this example does not constitute any limitation to the present application.
Take a hotel as an example: a five-star hotel in downtown Shenzhen whose guests want to pay to watch the movies the hotel provides (the interaction device is a network television, and the interaction information covers the three payment modes for paid viewing). There are three payment modes: Alipay, WeChat Pay, and the hotel room account.
The video fusion system of the application acquires the interaction information of the interaction device through the service usage detection module. Suppose that within one year users paid 100 times in total, of which 20 payments were by Alipay, 10 by WeChat Pay, and 70 by hotel room account.
The interactive service content provided by the corresponding interaction device is then adjusted according to this interaction information.
Specifically, a usage-count threshold is set for each selectable interaction mode according to the selectable interaction modes of the service; then, for each selectable interaction mode adopted by the user, it is judged whether its use within one period exceeds the corresponding threshold, and if not, the service is adjusted.
Specifically, a count threshold is set for each of the three modes (Alipay, WeChat Pay, and hotel room account), for example 10 uses each, and the usage counts for one period are compared against it. If a mode fails the threshold requirement, the service is adjusted, for example by modifying its selectable interaction modes: here WeChat Pay, used exactly 10 times, does not exceed the threshold of 10, so the WeChat Pay option is deleted or hidden.
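The hotel example reduces to a small threshold check. The following sketch mirrors the numbers above and the "strictly exceeds" reading of the threshold test; the function and key names are illustrative.

```python
# The hotel example as code: per-mode usage counts over one period are
# compared against a usage-count threshold, and any mode that does not
# exceed it is removed or hidden.
THRESHOLD = 10          # usage-count threshold per period

usage = {"Alipay": 20, "WeChat Pay": 10, "hotel room account": 70}

def adjust_service(usage_counts: dict, threshold: int) -> dict:
    kept, removed = {}, []
    for mode, count in usage_counts.items():
        if count > threshold:          # must strictly exceed the threshold
            kept[mode] = count
        else:
            removed.append(mode)       # delete or hide this payment mode
    return {"kept": kept, "removed": removed}

print(adjust_service(usage, THRESHOLD))
# {'kept': {'Alipay': 20, 'hotel room account': 70},
#  'removed': ['WeChat Pay']}
```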
The application also provides a video fusion method, which comprises the following steps (a minimal pipeline sketch follows the listing):
Step 1: obtaining scene video information through one or more of a one-way video application system, a semi-two-way video application system, and a two-way video application system;
Step 2: acquiring moving object information in the scene video information according to the scene video information;
Step 3: generating scene data according to each item of scene video information and/or the moving object information;
Step 4: performing fusion calculation on the scene video information and the scene data to generate fusion information, wherein the fusion information comprises at least one item of scene data and one item of scene video information.
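The four steps as a minimal pipeline sketch, with placeholder bodies standing in for the modules described earlier; all names are illustrative assumptions.

```python
# Minimal pipeline sketch of the four-step video fusion method.
def acquire_scene_video(sources):                       # Step 1
    return [{"source": s, "frames": []} for s in sources]

def extract_moving_objects(videos):                     # Step 2
    return [{"video": v["source"], "count": 0} for v in videos]

def generate_scene_data(videos, objects):               # Step 3
    return [{"source": v["source"], "objects": o}
            for v, o in zip(videos, objects)]

def fuse(videos, scene_data):                           # Step 4
    # Fusion information: at least one item of scene data plus one
    # item of scene video information.
    return [{"video": v, "data": d} for v, d in zip(videos, scene_data)]

videos = acquire_scene_video(["one-way", "semi-two-way", "two-way"])
objects = extract_moving_objects(videos)
fusion_info = fuse(videos, generate_scene_data(videos, objects))
```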
In this embodiment, the video fusion method further includes:
detecting interaction information of an interaction device;
and adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device.
In this embodiment, the video fusion method further includes:
acquiring aerial photographing information;
and generating a scene 3D view according to the aerial photographing information.
It will be appreciated that the above description of the system applies equally to the description of the method.
The application also provides an electronic device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor executes the computer program to realize the video fusion method.
The electronic device comprises an input device, an input interface, a central processing unit, a memory, an output interface and an output device. The input interface, the central processing unit, the memory and the output interface are mutually connected through a bus, and the input equipment and the output equipment are respectively connected with the bus through the input interface and the output interface and further connected with other components of the electronic equipment. Specifically, the input device receives input information from the outside and transmits the input information to the central processing unit through the input interface; the central processing unit processes the input information based on the computer executable instructions stored in the memory to generate output information, temporarily or permanently stores the output information in the memory, and then transmits the output information to the output device through the output interface; the output device outputs the output information to the outside of the electronic device for use by the user.
That is, the electronic device may also be implemented to include: a memory storing computer-executable instructions; and one or more processors which, when executing the computer-executable instructions, may implement the video fusion method.
In one embodiment, an electronic device may be implemented to include: a memory configured to store executable program code; one or more processors configured to execute executable program code stored in the memory to perform the video fusion method in the above embodiments.
The present application also provides a computer-readable storage medium storing a computer program which, when executed by a processor, is capable of implementing a video fusion method as described above.
The computing device includes a Central Processing Unit (CPU) that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) or a program loaded from a storage section into a Random Access Memory (RAM). In the RAM, various programs and data necessary for the operation of the apparatus are also stored. The CPU, ROM, and RAM are connected to each other via a bus. An input/output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input section including a keyboard, a mouse, and the like; an output section including a display such as a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker; a storage section including a hard disk and the like; and a communication section including a network interface card such as a LAN card, a modem, or the like. The communication section performs communication processing via a network such as the internet. The drive is also connected to the I/O interface as needed. A removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is mounted on the drive as necessary, so that a computer program read out therefrom is mounted into the storage section as necessary.
In particular, according to embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section, and/or installed from a removable medium. The computer program, when executed by a central processing unit (CPU), performs the above-described functions defined in the method of the present application.

It should be noted that the computer storage media of the present application can be computer readable signal media or computer readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

In the present application, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium, by contrast, may include a propagated data signal with computer readable program code embodied therein, for example in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present application may be implemented by software or hardware. The modules or units described may also be provided in a processor, the names of which in some cases do not constitute a limitation of the module or unit itself.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (7)

1. A video fusion system, the video fusion system comprising:
the video acquisition module (1) is used for acquiring scene video information, and the scene video information is acquired through one or more of a unidirectional video application system (5), a semi-bidirectional video application system (6) and a bidirectional video application system (7);
the moving object data statistical system (2) is used for acquiring moving object information in scene video information according to the scene video information; wherein the moving object data statistics system (2) comprises:
the characteristic identification module is used for identifying the characteristics of the moving objects in the video information of each scene;
a statistics module for acquiring a plurality of items among the quantity information of the moving objects, the position information of each moving object, and the feature information of each moving object;
the data generation module (3) is used for generating scene data according to the scene video information and/or the moving object information;
a fusion calculation module (4), wherein the fusion calculation module (4) is configured to perform fusion calculation on the scene video information and the scene data, so as to generate fusion information, and the fusion information includes at least one item of scene data and one item of scene video information;
the service usage detection module is used for detecting interaction information of the interaction device; the interactive information of the interactive device comprises the interactive information of a user to the interactive device in a period;
the service adjusting module is used for adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device;
wherein the service adjusting module adjusting the interactive service content provided by the corresponding interaction device according to the interaction information of the interaction device comprises: setting a usage-count threshold for each selectable interaction mode according to the selectable interaction modes of the service; judging, for each selectable interaction mode adopted by the user, whether its use within one period exceeds the corresponding usage-count threshold, and if not, adjusting the service;
the interactive device is a video on demand device;
the selectable interaction mode is a charging mode;
the adjusting the service comprises removing the service or removing the selectable interaction mode.
2. The video fusion system of claim 1, wherein the video fusion system further comprises:
the aerial photography information acquisition module is used for acquiring aerial photography information;
and the 3D view generation module is used for generating a scene 3D view according to the aerial photographing information.
3. The video fusion system of claim 2, further comprising:
and the 3D fusion module is used for fusing the scene video information and/or the moving object information with the scene 3D view so as to display all information or part of information in the scene video information and/or the moving object information in the scene 3D view.
4. The video fusion system of claim 3, further comprising:
the video storage module is used for storing scene video information acquired by one or more of a unidirectional video application system, a semi-bidirectional video application system and a bidirectional video application system;
and the calling module is used for calling the scene video information stored by the video storage module.
5. The video fusion system according to claim 4, wherein the active object data statistics system (2) further comprises:
the system comprises a specific moving object presetting module, a data processing module and a data processing module, wherein the specific moving object presetting module is used for generating characteristic information of a specific moving object;
and the feature locking tracking module is used for acquiring the moving object which accords with the feature information of the specific moving object in each scene video information.
6. A video fusion method, characterized in that the video fusion method comprises:
scene video information is obtained, and the scene video information is obtained through one or more of a one-way video application system, a semi-two-way video application system and a two-way video application system;
obtaining moving object information in the scene video information according to the scene video information, wherein the moving object information comprises: the features of the moving objects in each item of scene video information; and a plurality of items among the quantity information of the moving objects, the position information of each moving object, and the feature information of each moving object;
generating scene data according to the scene video information and/or the moving object information;
performing fusion calculation on the scene video information and the scene data to generate fusion information, wherein the fusion information comprises at least one item of scene data and one item of scene video information; the video fusion method further comprises the following steps:
detecting interaction information of an interaction device;
adjusting the interactive service content provided by the corresponding interactive device according to the interactive information of the interactive device;
the interactive information comprises interactive information of a user on the interactive device in a period;
the adjusting the interactive service content provided by the corresponding interaction device according to the interaction information of the interaction device comprises: setting a usage-count threshold for each selectable interaction mode according to the selectable interaction modes of the service; judging, for each selectable interaction mode adopted by the user, whether its use within one period exceeds the corresponding usage-count threshold, and if not, adjusting the service;
the interactive device is a video on demand device;
the selectable interaction mode is a charging mode;
the adjusting the service comprises removing the service or removing the selectable interaction mode.
7. The video fusion method of claim 6, further comprising:
acquiring aerial photographing information;
and generating a scene 3D view according to the aerial photographing information.
CN202011206384.4A 2020-11-03 2020-11-03 Video fusion system and method Active CN112037264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011206384.4A CN112037264B (en) 2020-11-03 2020-11-03 Video fusion system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011206384.4A CN112037264B (en) 2020-11-03 2020-11-03 Video fusion system and method

Publications (2)

Publication Number Publication Date
CN112037264A CN112037264A (en) 2020-12-04
CN112037264B true CN112037264B (en) 2021-02-05

Family

ID=73573559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011206384.4A Active CN112037264B (en) 2020-11-03 2020-11-03 Video fusion system and method

Country Status (1)

Country Link
CN (1) CN112037264B (en)


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9965779B2 (en) * 2015-02-24 2018-05-08 Google Llc Dynamic content display time adjustment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110249095A1 (en) * 2010-04-12 2011-10-13 Electronics And Telecommunications Research Institute Image composition apparatus and method thereof
CN106303741A (en) * 2016-08-25 2017-01-04 武克易 The advertisement play system of feature based data search
CN111007936A (en) * 2018-10-08 2020-04-14 阿里巴巴集团控股有限公司 Terminal equipment in physical store and information processing method, device and system thereof
CN109840951A (en) * 2018-12-28 2019-06-04 北京信息科技大学 The method and device of augmented reality is carried out for plane map
CN110310306A (en) * 2019-05-14 2019-10-08 广东康云科技有限公司 Method for tracking target, system and medium based on outdoor scene modeling and intelligent recognition

Also Published As

Publication number Publication date
CN112037264A (en) 2020-12-04

Similar Documents

Publication Publication Date Title
CN108427911B (en) Identity verification method, system, device and equipment
CN107872732B (en) Self-service interactive video live broadcast system
TWI765304B (en) Image reconstruction method and image reconstruction device, electronic device and computer-readable storage medium
CN111669612B (en) Live broadcast-based information delivery method and device and computer-readable storage medium
US20150222815A1 (en) Aligning videos representing different viewpoints
CN112954450B (en) Video processing method and device, electronic equipment and storage medium
CN110136091B (en) Image processing method and related product
CN113569825B (en) Video monitoring method and device, electronic equipment and computer readable medium
CN112770042B (en) Image processing method and device, computer readable medium, wireless communication terminal
WO2021143228A1 (en) Data pushing method and apparatus, electronic device, computer storage medium and computer program
CN111385484B (en) Information processing method and device
CN112182299A (en) Method, device, equipment and medium for acquiring highlight segments in video
CN115379125B (en) Interactive information sending method, device, server and medium
CN115761090A (en) Special effect rendering method, device, equipment, computer readable storage medium and product
CN112037264B (en) Video fusion system and method
CN114565952A (en) Pedestrian trajectory generation method, device, equipment and storage medium
CN111385460A (en) Image processing method and device
Duan et al. Flad: a human-centered video content flaw detection system for meeting recordings
CN112312207B (en) Method, device and equipment for getting through traffic between smart television terminal and mobile terminal
CN115222969A (en) Identification information identification method, device, equipment, readable storage medium and product
CN110809166B (en) Video data processing method and device and electronic equipment
CN110321857B (en) Accurate passenger group analysis method based on edge calculation technology
CN114143429A (en) Image shooting method, image shooting device, electronic equipment and computer readable storage medium
CN113762156B (en) Video data processing method, device and storage medium
CN112818914B (en) Video content classification method and device

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant
EE01: Entry into force of recordation of patent licensing contract
    Application publication date: 20201204
    Assignee: Beijing Qinghe Technology Co.,Ltd.
    Assignor: Zhejiang Qinghe Technology Co.,Ltd. | SHANGHAI QINGHE TECHNOLOGY CO.,LTD.
    Contract record no.: X2022980017469
    Denomination of invention: A Video Fusion System and Method
    Granted publication date: 20210205
    License type: Common License
    Record date: 20221010