CN106210450B - A multi-channel, multi-view big-data video editing method - Google Patents

A multi-channel, multi-view big-data video editing method Download PDF

Info

Publication number
CN106210450B
CN106210450B (application CN201610571146.0A; also published as CN106210450A)
Authority
CN
China
Prior art keywords
video
networking
video source
wireless ad hoc
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201610571146.0A
Other languages
Chinese (zh)
Other versions
CN106210450A (en)
Inventor
罗轶
王开宇
李宗祺
周伟
周旬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201610571146.0A
Publication of CN106210450A
Application granted
Publication of CN106210450B
Expired - Fee Related (current legal status)
Anticipated expiration

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a multi-channel, multi-view big-data video editing method. It interacts with socialized big-data video information to capture video related to the user, and, through an inference engine and a knowledge base, automatically generates film and television programs starring the user.

Description

A multi-channel, multi-view big-data video editing method
Technical field
The present invention relates to a video editing method for mobile augmented reality, in particular a film-and-television expert system that, based on SLAM and wireless ad hoc networking, fuses and edits multi-channel, multi-view big-data video and automatically generates film and television programs by computer through an inference engine and a knowledge base.
Background technique
People are keen on selfies, but while taking a self-portrait photo is easy, shooting a self-portrait video is relatively hard.
Facing moments wonderful enough to make us reach for the camera, we are distracted by the act of shooting and miss the wonderful moment itself.
The massive footage we go to such trouble to shoot often lies dormant on hard disks, never organized and never reviewed.
When traveling or attending events together, companions often film one another. Yet it is hard to obtain footage of ourselves from other people's cameras in real time, and equally hard to view the same event from others' perspectives in real time.
The spread of surveillance cameras produces massive video information, and has pushed video retrieval technology, which searches big-data video for useful information, toward maturity and the market.
Traditional film and television creation is grounded in literary screenwriting. The director turns the script's textual imagery into audiovisual language, completing the pre-production artistic creation from writing the shooting script to leading cinematography, art design, sound recording, acting and the other departments in concert; this is the second act of creation in filmmaking. Editing is the third: the editor decomposes, recombines and assembles the footage and sound material from pre-production into a complete film.
Montage was originally an architectural term meaning composition or assembly; here it can be understood as the editing technique of deliberately splicing time and space together. In a traditional work, montage is born in the literary script, embodied in the shooting script, and finalized on the cutting table. That workflow, however, is hard to apply when existing mass data must be edited into a film or television work.
Neuroscience has found that the brain's imagined emotional experience resembles actual experience. Studies of learning and memory mechanisms show that information is remembered better when encoded together with composite cues, and that memory is strongest when the recalled information is associated with the self. The advantage of this self-reference effect appears mainly in responses cued by recollection: when we encounter something new that is substantially connected to ourselves, we do not easily forget it. Put simply, people care more about things related to themselves.
In mobile augmented reality, the core technology is tracking registration of the target object. It divides into two classes: vision-based tracking registration and non-vision tracking registration. Vision-based tracking registration further divides into marker-based and markerless (natural-feature-based) tracking registration. At present, the main markerless augmented reality methods are SLAM (Simultaneous Localization And Mapping) and PTAM (Parallel Tracking and Mapping).
SLAM (simultaneous localization and mapping) is an algorithm for localizing and building a map at the same time. Google's Project Tango is a SLAM device; its three core technologies, motion tracking, depth perception and area learning, bring a new kind of spatial-perception experience to mobile platforms. However, Project Tango is built on a phone platform; the occlusion caused by holding the phone restricts it to a monocular vision system, making it hard to add more infrared sensors or capture a wider viewing angle.
Summary of the invention
To solve the above problems, that is, to capture the needed information from socialized big-data video and, relying on the self-reference effect, realize applications such as assisted learning, memory, tour guiding, games, scene roaming, augmented reality and mixed reality, the invention provides fusion editing of multi-channel, multi-view big-data video based on tracking registration and wireless ad hoc networking: a film-and-television expert system that, through an inference engine and a knowledge base, automatically generates programs related to the individual by computer. The invention discloses a multi-channel, multi-view big-data video editing method.
The technical solution adopted by the present invention to solve the technical problems is:
A multi-channel, multi-view big-data video editing method comprises multi-channel, multi-view big-data video sources, a network layer, a server and a film-editing expert system. It is characterized in that: based on tracking registration of a target object, video data of the target object are identified and extracted from the multi-channel, multi-view big-data video sources and transmitted through the network layer to the server, where the film-editing expert system edits and processes them and the computer automatically generates augmented-reality or mixed-reality film and television works related to the target object.
The target-object tracking registration is either vision-based or non-vision tracking registration. The vision-based tracking registration is either marker-based or markerless. The markerless tracking registration is SLAM (Simultaneous Localization And Mapping) or PTAM (Parallel Tracking and Mapping). Preferably, for mobile tracking registration the invention uses SLAM tracking registration to solve its technical problem, with the following technical solution:
1. When there is more than one video source and the target object is registered by SLAM tracking, a bearing mark code is defined in the video-source codec rules, based on SLAM, for instantaneous localization, video-stream interaction and editing.
1.1. The bearing mark code performs direction and attitude calculation based on SLAM, defining the camera shooting the video source together with its position and orientation, and identifying the transformations from the world coordinate system to the camera coordinate system, from the camera coordinate system to the imaging-plane coordinate system, and from the imaging-plane coordinate system to the image coordinate system. When the source camera is static, the bearing mark code comprises: a camera identification code, camera coordinates, camera azimuth, left-right tilt, front-back pitch and shooting parameters. When the source camera is moving, the bearing mark code comprises: the camera identification code plus the timestamped camera route produced by the camera's displacement, and the change data of azimuth, left-right tilt, front-back pitch and shooting parameters.
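The two variants of the bearing mark code described in section 1.1 can be sketched as plain data structures. This is a minimal illustration, not the patent's actual encoding; all field names and units (degrees, world-frame coordinates) are assumptions for clarity.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class BearingMarkCode:
    """Bearing mark code of a static source camera (illustrative fields)."""
    camera_id: str                              # camera identification code
    position: Tuple[float, float, float]        # camera coordinates, world frame
    azimuth_deg: float                          # camera orientation angle
    roll_deg: float                             # left-right tilt
    pitch_deg: float                            # front-back pitch
    shooting_params: Dict[str, float] = field(default_factory=dict)

@dataclass
class TimedPose:
    """One timestamped sample along a moving camera's route."""
    timestamp: float
    position: Tuple[float, float, float]
    azimuth_deg: float
    roll_deg: float
    pitch_deg: float

@dataclass
class MovingBearingMarkCode:
    """Bearing mark code of a moving source camera: id plus timestamped route."""
    camera_id: str
    route: List[TimedPose] = field(default_factory=list)
```

Under this reading, a static camera carries a single pose while a moving camera carries a time series of pose changes, which is what lets third-party nodes judge bearing, angle and distance later in section 2.3.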
1.2. The visual sensor that defines the bearing mark code via SLAM may be a monocular vision system, a binocular vision system, a multi-camera vision system or a panoramic vision system.
1.3. When the visual sensor defining the bearing mark code via SLAM is a panoramic vision system, the camera coordinates in the bearing mark code are expressed in a spherical coordinate system.
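For the panoramic case in section 1.3, spherical camera coordinates must be convertible back to Cartesian coordinates before the frame transformations of section 1.1 can be applied. A minimal sketch, assuming the physics convention (radius, polar angle from the z-axis, azimuth); the patent does not specify a convention, so this is illustrative only:

```python
import math

def spherical_to_cartesian(r, theta_deg, phi_deg):
    """Convert spherical coordinates (r, polar angle theta, azimuth phi)
    to Cartesian (x, y, z). Angles are in degrees."""
    theta = math.radians(theta_deg)
    phi = math.radians(phi_deg)
    x = r * math.sin(theta) * math.cos(phi)
    y = r * math.sin(theta) * math.sin(phi)
    z = r * math.cos(theta)
    return (x, y, z)
```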
2. The network layer is a protocol layer responsible for establishing and maintaining the wireless ad hoc network. Through interfaces it provides management and data services to the film-editing expert system, which sits at the top of the wireless ad hoc protocol stack. The network layer connects to the server database and maintains the network information database.
2.1. When there is more than one video source, the wireless ad hoc network uses the SLAM-based bearing mark code defined in the video-source codec rules and, through the network protocol, lets multi-channel, multi-view big-data video from multiple source nodes communicate directly or over multiple hops, forming automatically, quickly and dynamically a dedicated short-range mobile wireless network. Each node in this network can combine terminal and relay-routing functions: it can both send and receive data and forward it over multiple hops.
2.2. The dynamically networked video-source nodes actively exchange bearing-mark-code information, use it to judge which third-party source nodes suit this node's bearing, angle and distance, and send those nodes video-data call requests under the agreed interaction protocol. The third-party node, using the requester's bearing mark code, clips the relevant video data out of its local big-data video in real time and sends it over the wireless ad hoc network. In this way multiple video-source nodes network automatically, quickly, temporarily and dynamically, and exchange video data relevant to each node through the network layer.
2.3. A SLAM-based method of judging the bearing, angle and distance between this node and a third-party video-source node from bearing-mark-code information is: from each node's known bearing mark code, compute the relation between the third-party node's three-dimensional coordinates and this node's coordinates, derive the translation and rotation transformation between the third-party node's coordinate system and this node's coordinate system, and thereby obtain the spatial coordinate transformation matrix.
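The computation in section 2.3 can be sketched for the simplified case where each node's orientation is a yaw angle about the vertical axis (the full patent transform would also use tilt and pitch). All names are illustrative; this is a sketch of the geometry, not the patent's implementation:

```python
import math

def rotation_z(yaw_deg):
    """3x3 rotation matrix about the z (vertical) axis."""
    c = math.cos(math.radians(yaw_deg))
    s = math.sin(math.radians(yaw_deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

def relative_pose(own_pos, own_yaw_deg, other_pos, other_yaw_deg):
    """Express a third-party node's pose in this node's frame.
    Returns (relative_position, relative_yaw, distance, bearing_deg)."""
    # World-frame offset of the other node from this node.
    d = tuple(o - s for o, s in zip(other_pos, own_pos))
    # Rotate the offset into this node's frame (inverse of own yaw).
    rel = mat_vec(rotation_z(-own_yaw_deg), d)
    dist = math.sqrt(sum(c * c for c in d))
    bearing = math.degrees(math.atan2(rel[1], rel[0]))
    return rel, (other_yaw_deg - own_yaw_deg) % 360.0, dist, bearing
```

The returned distance and bearing are exactly the quantities a node would need in section 2.2 to decide whether a third-party source suits its orientation, angle and distance.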
2.4. The wireless ad hoc network may be: a point-to-point Ad Hoc network; a wireless mesh network; a ZigBee-based network; a WiFi-based network, including direct networking over a WiFi hotspot and HaLow; a Bluetooth-based network, including Bluetooth Low Energy (BLE), point-to-point Bluetooth networking based on the Multipeer Connectivity framework, and iBeacon-based networking; an ultra-wideband (UWB) Ad Hoc network; or a mixed-communication network combining UWB with Bluetooth, WiFi with Bluetooth, or WiFi with ZigBee.
When the wireless ad hoc network is based on, or mixes in, WiFi, a full-function node can also act as the cross-network node joining the ad hoc network to the internet.
3. The film-editing expert system comprises a human-computer interface, a video retrieval engine, an inference engine, a knowledge base, a working memory and an interpreter.
3.1. The film-editing expert system is an intelligent computer program embodying the specialist knowledge and experience of screenwriting, directing and editing. It models how human experts solve problems and uses the knowledge-representation and inference techniques of artificial intelligence to simulate difficult problems normally solved by experts. Processing video symbols heuristically and interactively, it separates knowledge from control so that it can handle uncertain problems and reach acceptable solutions.
3.2. The video retrieval engine uses intelligent video retrieval to filter, detect, recognize, classify and multi-target-track big-data video from local storage and from the network layer in real time; according to time, shot scale, imagery, intonation, emotion, mood and judgements about interpersonal relationships, it automatically cuts video sources into a series of storyboard shots carrying index, semantic, shot-scale or clip identification labels.
The storyboard processing algorithms are target detection, target tracking, target recognition, behavior analysis, content-based video retrieval and data fusion.
Built on intelligent video retrieval, the engine comprises a feature-extraction module, a video-segmentation module, a filtering and video-stabilization module, and an intelligent retrieval-matching module.
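The patent does not specify how the video-segmentation module cuts sources into storyboard shots; a common baseline for such segmentation is shot-boundary detection by comparing coarse color histograms of consecutive frames. A minimal sketch under that assumption, with frames represented as flat lists of 0-255 grayscale values purely for illustration:

```python
def frame_histogram(frame, bins=8):
    """Coarse normalized histogram of a frame (flat list of 0-255 ints)."""
    hist = [0] * bins
    for px in frame:
        hist[min(px * bins // 256, bins - 1)] += 1
    total = float(len(frame))
    return [h / total for h in hist]

def segment_shots(frames, threshold=0.5):
    """Cut a frame sequence into shots wherever the L1 histogram distance
    between consecutive frames exceeds `threshold`."""
    cuts = [0]
    prev = frame_histogram(frames[0])
    for i in range(1, len(frames)):
        cur = frame_histogram(frames[i])
        if sum(abs(a - b) for a, b in zip(prev, cur)) > threshold:
            cuts.append(i)
        prev = cur
    # Return (start, end) index pairs, end exclusive.
    return [(cuts[k], cuts[k + 1] if k + 1 < len(cuts) else len(frames))
            for k in range(len(cuts))]
```

Each returned (start, end) span would then be labelled with the index, semantic and shot-scale tags described above before being handed to the inference engine.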
3.3. The inference engine is the component that reasons over screenplay-plot, directing and editing knowledge, consisting mainly of an inference part and a control part. It is a program that interprets knowledge: following the semantics of the knowledge, it executes the knowledge found by its strategy and records the results in the appropriate area of working memory.
One inference logic of the engine is classical logic, either deductive or inductive. Another is non-classical logic, for example dialectical logic.
One working mode of the inference engine is deduction: taking as known facts a number of shots with clip identification labels supplied by the video retrieval engine, it derives the film structure, plot understanding or scene planning axiomatically.
Alternatively, the engine works by non-monotonic reasoning, comprising default reasoning based on default information, and constraint reasoning. The logic of default reasoning is: the clip label of shot S always holds if and only if no fact proves it invalid. The logic of constraint reasoning is: S holds only within the specified range if and only if no fact proves its clip label holds in a wider context.
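The two non-monotonic rules above can be sketched directly. This is an illustrative reduction, with shots and facts as plain dictionaries whose keys (`clip_label`, `invalidates`, `widens`) are assumed for the example, not taken from the patent:

```python
def default_holds(shot, facts):
    """Default reasoning: shot S's clip label holds
    unless some known fact proves it invalid."""
    return not any(f.get("invalidates") == shot["clip_label"] for f in facts)

def constraint_scope(shot, facts, default_scope):
    """Constraint reasoning: S holds only within `default_scope`
    unless some fact proves its clip label holds in a wider context."""
    widened = any(f.get("widens") == shot["clip_label"] for f in facts)
    return "wider" if widened else default_scope
```

Both functions are non-monotonic in the intended sense: adding a fact can retract a conclusion that held before.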
Alternatively, the engine works by qualitative reasoning, which starts from the physical system or from human intuitive thinking and derives behavior descriptions in order to predict system behavior. In the film-editing expert system, qualitative reasoning uses the local structural rules of the storyboard shots to predict the film's plot understanding or scene planning.
3.4. The knowledge base is the collection of screenwriting, directing and editing domain knowledge, including basic facts, rules and other relevant information. The knowledge base is independent of the system program; users can improve the expert system's performance by changing and refining its contents. One knowledge base is built by screenwriting, directing and editing experts, together with deep learning, unsupervised learning, transfer learning or multi-task learning over existing film and television works, music, literature, 3D models and picture works.
3.5. The working memory is the set reflecting the current problem-solving state; it stores all information generated while the system runs and the initial data required. That set includes user input, intermediate reasoning results and the record of the reasoning process. The state formed in working memory from basic facts, propositions and relations is both the basis on which the inference engine selects knowledge and the source from which the explanation facility draws its account of the reasoning.
3.6. The interpreter explains the solution process and answers users' questions, letting users understand what the program is doing and why.
3.7. Through the inference engine and knowledge base, the film-editing expert system re-creates the storyboard shots that the video retrieval engine has filtered and indexed, and the server automatically generates the film or television program. Its working mode is:
The storyboard shots are handed to the smart-shoot expert system (a trademarked name in the original). Its inference engine, applying the plot, directing and editing rules in the knowledge base, selects and calls the plots, footage, scenes, sound, pictures, text, 3D models and other material held there, and chooses, decomposes and reassembles the storyboard shots, so that the computer automatically turns the user's original video into a work of acceptable quality.
A. Based on timestamps, the network layer calls all local video sources relevant to this node, and the third-party video sources, for the set time segments;
B. Sources of poor image quality are filtered out by image recognition;
C. Combining the video retrieval engine with the bearing mark codes and coordinate-matrix identification codes in the sources, the video is cut into storyboard shots and indexed, and each shot's scale is defined;
D. The storyboard shots are further refined by a video-stabilization algorithm;
E. The video retrieval engine links the storyboard shots to natural-language semantics;
F. From a number of semantically labelled shots, the inference engine computes with the knowledge-base rules, calls the footage, sound, pictures, text, 3D models and other material in the knowledge base, and assembles and edits them with the shots by computed logic, so that the computer automatically generates a film or television work of acceptable quality;
G. According to the human-computer interface settings, the automatically generated work is stored in working memory, on the phone or in cloud storage, or sent to a video site, social media or mailbox, or streamed in real time.
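The steps above can be sketched as a single pipeline function. The stage implementations are passed in as callables because the patent only names them; every parameter name here is an assumption, and the stabilization of step D is folded into `segment` for brevity:

```python
def edit_pipeline(local_sources, third_party_sources, knowledge_base,
                  quality_ok, segment, link_semantics, infer, deliver):
    """Illustrative sketch of steps A-G of the editing workflow."""
    # A: gather all local and third-party sources for the time window.
    sources = list(local_sources) + list(third_party_sources)
    # B: drop sources whose image quality is too poor.
    sources = [s for s in sources if quality_ok(s)]
    # C/D: cut each source into indexed (and stabilized) storyboard shots.
    shots = [shot for s in sources for shot in segment(s)]
    # E: attach natural-language semantics to every shot.
    shots = [link_semantics(shot) for shot in shots]
    # F: the inference engine assembles shots with knowledge-base material.
    program = infer(shots, knowledge_base)
    # G: store or stream the generated program.
    return deliver(program)
```

The point of the sketch is the data flow: sources narrow to shots, shots gain semantics, and only then does reasoning over the knowledge base produce the finished program.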
4. So that a short video can carry more information and satisfy people's thirst for knowledge, the multi-channel, multi-view big-data video editing method provides video hyperlinks, as follows:
4.1. A hyperlink label is defined in the video-source codec rules based on SLAM;
4.2. A hyperlink label is likewise defined in the codec rules for the footage, pictures, text and 3D-model material in the knowledge base;
4.3. The hyperlink label is the connection pointing from specific content in one short video to another target short video; during playback it appears as a recognizable, triggerable hot zone. One kind of hot zone is realized by defining a separate layer in the video-source codec rules;
4.4. When a hot zone is triggered by mouse, touch, gesture control or eye movement, the player jumps to present the short video, subtitles, sound or 3D material defined by that hot zone's hyperlink label;
4.5. When the playback device is a screen, the information called by a hyperlink label is either an internal link shown as picture-in-picture in the same frame, or an external link that jumps playback within the same frame. When the playback device is a VR player, the information called is either an internal picture-in-picture link shown in the current field of view, or a viewpoint link that switches presentation to another viewpoint.
5. Current panoramic systems use a single observation point. When the same place contains multiple panoramic-system video sources, the multi-channel, multi-view big-data video editing method provides scene roaming that switches in real time to browsing from another person's viewpoint, as follows:
5.1. When several video sources network dynamically, each source in the network at that time is assigned a unique bearing mark code based on SLAM;
5.2. Each bearing mark code in the network appears as a recognizable, triggerable hot zone on a VR browser or video monitor, and as a recognizable, triggerable hot spot on the network-layer map, which is the real-time dynamic map of all member positions in the wireless ad hoc network;
5.3. When a hot zone or hot spot is triggered by mouse, touch, gesture control or eye movement, the local node sends a video-data call request to the third-party source node that the hot zone or hot spot defines. Once that node consents, or consent follows from the network protocol, the VR browser or video monitor switches to playing the third-party node's live footage, or video that node has designated, realizing real-time scene roaming from another person's viewpoint.
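The consent-gated switch of section 5.3 can be sketched as a small request handler. The network is modelled as a dictionary of node records; the field names (`allows_sharing`, `stream_url`) and the RTSP-style URLs are illustrative assumptions, not the patent's protocol:

```python
def request_viewpoint_switch(target_id, network):
    """Ask the third-party node identified by a triggered hot zone/spot
    for its live stream; switch only on permission."""
    target = network.get(target_id)
    if target is None:
        return ("error", "node has left the ad hoc network")
    if target["allows_sharing"]:        # consent per the interaction protocol
        return ("switch", target["stream_url"])
    return ("denied", None)
```

The "node left" branch matters because section 2.2 describes the ad hoc network as temporary and dynamic: a hot spot on the map may outlive the node it points at.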
The multi-channel, multi-view big-data video editing method can realize its basic functions on a mobile phone with a monocular vision system. A panoramic vision system, however, gives a better shared experience, and a working mode adapted to panoramic vision frees people's attention from the work of shooting. The invention therefore also discloses a SLAM-based panoramic camera: a panoramic camera fitted with a SLAM module and connected to a smartphone through one or more data interfaces and one or more interface modules.
The interface module is a quick-release buckle, so the camera can be quickly attached to or detached from a matching smartphone, tripod, drone, vehicle mount, folding handle, helmet, cap, headwear, harness, belt or wristband, supporting effortless shooting in vehicle-mounted, wearable or handheld mode.
The SLAM-based panoramic camera defines its bearing mark code via SLAM, with camera coordinates expressed in a spherical coordinate system as the reference coordinates for target-object tracking registration. Dynamically networked under the protocol, it actively exchanges bearing-mark-code information with the other source nodes, judges from that information which third-party nodes suit the target object's bearing, angle and distance, and sends them video-data call requests under the agreed interaction protocol. The acquired data are transmitted through the network layer to the server, or through the data interface to the smartphone, to be edited and processed by the film-editing expert system.
The SLAM-based panoramic camera comprises a housing, a SLAM module, two or more lenses, a vision co-processor, an infrared transmitter, a depth sensor, an infrared camera, a gyroscope/accelerometer, working memory, a wireless ad hoc networking module, data interfaces, interface modules and a battery. With two lens groups, the lenses are fisheye; with six, they face front, back, left, right, up and down.
Preferably, a wearable SLAM-based panoramic camera is attached by its interface module to the top of a cap manufactured with a flexible battery; one flexible battery is a thin-film solar cell.
The beneficial effects of the present invention are:
The invention discloses a multi-channel, multi-view big-data video editing method.
The invention can capture video information related to the user by interacting with socialized big-data video, and, through the inference engine and knowledge base, automatically generate film and television programs starring the user.
The disclosed SLAM-based panoramic camera records panoramic video on a SLAM basis, making the bearing mark code easy to compute; to a degree it blends marker-based and markerless tracking registration, yielding more accurate target-object tracking registration. The user need only carry it, without deliberately shooting, and the film-and-television artificial intelligence can generate meaningful video clips from the camera's own data and from the other socialized big-data video sources it interacts with under the protocol.
The invention lets us focus on the wonderful moment itself, while realizing scene roaming, assisted learning, memory, tour guiding, games and other functions; it can cooperate across content areas such as film, concerts, athletic competition, education, tourism, extreme sports, news recording and games, realizing augmented-reality and mixed-reality applications.
Detailed description of the invention
The present invention will be further explained below with reference to the attached drawings and embodiments.
Fig. 1 is a schematic diagram of the workflow of the invention;
Fig. 2 is a schematic diagram of video interaction based on the wireless ad hoc network of the invention.
Specific embodiments:
As shown in Figs. 1-2:
In an embodiment, the basic multi-channel, multi-view big-data video editing method is realized on a mobile phone with a monocular vision system. It tracks and registers the target object using the basic positioning the phone already provides, performs shooting direction and attitude calculation with the phone's monocular vision system, and defines a bearing mark code in the video-source codec rules for instantaneous localization, video-stream interaction and editing. The bearing mark code comprises: the camera identification code, camera coordinates, camera azimuth, left-right tilt, front-back pitch and shooting parameters. When the source camera is moving, it comprises the camera identification code plus the timestamped camera route produced by displacement, and the change data of azimuth, left-right tilt, front-back pitch and shooting parameters. When there is more than one video source, the footage from the phone's monocular system, together with the interaction information of the other sources, is transmitted by the phone through the network layer to the server, or edited and processed by the film-editing expert system.
The additional video sources are the other video data sources under the network-layer protocol.
In embodiment, a kind of video source is that the activity companion that goes with mutually shoots the data of generation, these data pass through hair Bright the method screens in real time, real-time, interactive, real-time editing, without taking great energy selection exchange again afterwards.
In another embodiment, a kind of video source is the data that the monitoring device shooting in network layer protocol generates, and allows prison Controlling equipment becomes the photographer of record life.
In another embodiment, one kind of video source is the socialized video big data that anyone under the network-layer protocol is willing to share, so that the massive video we take pains to shoot no longer sleeps on hard disks but can be transmitted and put to use.
In an embodiment, in order to free people's attention from the work of shooting so they can concentrate on being at their best, and to speed up computation of the bearing identification code and track and register the target object more accurately, the invention also discloses a SLAM-based panoramic camera. The SLAM-based panoramic camera is a panoramic camera configured with a SLAM module, connected to a smartphone through one or more data interfaces and one or more interface modules.
The SLAM-based panoramic camera defines the bearing identification code on the basis of SLAM. The camera coordinates in the bearing identification code are expressed in a spherical coordinate system and serve as the reference coordinates for tracking and registering the target object. The camera dynamically networks with multiple video source nodes under the protocol and actively exchanges bearing identification code information; from this information it judges which third-party video source nodes suit the target object in bearing, angle and distance, and initiates video-data call requests to them according to the set interaction protocol. The acquired data are transmitted to the server through the network layer, or to the smartphone through the data interface, to be edited by the film-editing expert system.
In an embodiment, the film-editing expert system applies intelligent video retrieval to the big-data video from the local SLAM-based panoramic camera and from SLAM-based panoramic cameras on the network layer, performing real-time filtering, detection, recognition, classification and multi-target tracking, and, according to time, shot scale, imagery, intonation, emotion, emotional state and judgments about interpersonal relationships, automatically cuts the video sources into a series of storyboard shots carrying index identification labels.
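The cutting of a video source into storyboard shots can be sketched, under strong simplifying assumptions, as thresholding the distance between consecutive per-frame color histograms; the real expert system's segmentation would be far richer, and all names here are hypothetical.

```python
def segment_shots(frame_hists, threshold=0.5):
    """Cut a frame sequence into shots wherever the histogram distance
    between consecutive frames exceeds `threshold` (a shot boundary).
    Returns a list of (start, end) index pairs, end exclusive."""
    def dist(a, b):
        # half the L1 distance between two normalized histograms, in [0, 1]
        return sum(abs(x - y) for x, y in zip(a, b)) / 2.0

    shots, start = [], 0
    for i in range(1, len(frame_hists)):
        if dist(frame_hists[i - 1], frame_hists[i]) > threshold:
            shots.append((start, i))  # close the current shot at the cut
            start = i
    if frame_hists:
        shots.append((start, len(frame_hists)))  # close the final shot
    return shots

# Two visually stable runs with an abrupt change between them -> two shots.
hists = [[1.0, 0.0]] * 3 + [[0.0, 1.0]] * 2
print(segment_shots(hists))  # [(0, 3), (3, 5)]
```

Each returned span would then be labeled (index, semantics, shot scale) before being handed to the inference engine.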
The storyboard shots are handed to the film-editing expert system. Its inference engine, applying the plot, directing and editing rules in the knowledge base, calls materials contained in the knowledge base such as scenes, sound, pictures, text and 3D models, and selects, decomposes and assembles the storyboard shots, so that the computer automatically generates user-original video works that reach an acceptable solution.
In embodiments, application scenarios of the multichannel multi-view big-data video clipping method include:
Mixed reality travel notes:
Neuroscience has found that the brain's imagined emotional experience resembles actual experience, yet while touring we see only one corner at a time. The method identifies the route the user travels and the scenic spots experienced, mixes into the user's SLAM-based panoramic camera footage more comprehensive, more varied and better shots or material of the related spots, and clips the result into a short video that can be shared on social media, expanding the breadth of the travel experience through imagined experience and psychological substitution.
Receptive learning:
Research on learning and memory has found that information closely connected to ourselves is not easily forgotten. In video in which you are the protagonist, the method locates imagery through object detection, object tracking, object recognition, behavior analysis or content-based video retrieval, and defines a hyperlink label layer for it. When a hyperlink label is triggered by mouse, touch, gesture control or eye-movement control, the relevant background knowledge is presented, turning linear video playback into web-like receptive learning.
Interactive plot implantation in panoramic video:
Taking a selfie is easy; shooting video of oneself is hard, and post-production editing is a high threshold for ordinary users. The method lets companions exchange video from their own viewpoints through the wireless ad hoc network, and film-and-television artificial intelligence clips the multichannel, multi-view video into a mixed-reality short film. Traditional film works go from script to storyboard to editing to the finished work, whereas the method starts from big-data storyboard shots: the inference engine, combined with the film knowledge base, implants plot, conflict and drama through unsupervised learning, automatically generating entertaining works that reach an acceptable solution.
Display marketing and LBS games:
When an enterprise's monitoring system networks with users of the multichannel multi-view big-data video clipping method, completing a specified task earns a set reward, so the dormant monitoring data, together with the customers' clipping method, become self-broadcasting media, and the fixed assets already invested are revitalized as marketing tools and LBS game items.
Mobile monitoring:
The conversion between security tool and game item is reversible: the SLAM-based panoramic camera is also a mobile monitor that can be deployed anywhere.
Scene roaming:
A SLAM-based panoramic camera in the wireless ad hoc network can, according to the network protocol, call the field of view of other SLAM-based panoramic cameras and switch in real time to browse from other viewpoints, realizing scene roaming: see what I see. One can thus watch a concert from the singer's viewpoint, a match from the referee's viewpoint, a work site from multiple viewpoints, and virtually live through what others experience.
The above embodiments only illustrate, and do not limit, the technical solutions of the present invention. Therefore, although this specification has described the invention in detail with reference to the above embodiments, those of ordinary skill in the art will understand that the invention may still be modified, equivalently substituted or recombined, and all technical solutions and improvements that do not depart from the spirit and scope of the invention shall be covered by the scope of the claims of the present invention.

Claims (5)

1. A multichannel multi-view big-data video clipping method, comprising multichannel, multi-view big-data video sources, a network layer, a server and a film-editing expert system; characterized in that:
one or more video sources are provided;
based on target-object tracking and registration technology, the video data of the target object are identified and extracted from the multichannel, multi-view big-data video sources and transmitted through the network layer to the server, where they are edited by the film-editing expert system; through the inference engine and the knowledge base, the computer automatically generates augmented-reality or mixed-reality film works related to the target object;
the network layer is a protocol layer responsible for establishing and maintaining the wireless ad hoc network; through interfaces it provides management and data services to the film-editing expert system, which sits at the top of the wireless ad hoc network protocol stack; the network layer connects to the server database and undertakes the management and maintenance of the network information database;
the target-object tracking and registration technology is vision-based tracking registration or non-vision tracking registration; the vision-based tracking registration uses markers or is markerless; the markerless tracking registration includes SLAM (Simultaneous Localization And Mapping) or PTAM (Parallel Tracking and Mapping);
when a video source tracks and registers the target object based on SLAM, a bearing identification code is defined in the video source codec rules based on SLAM for instant positioning, video-stream information exchange and clipping;
the film-editing expert system comprises a human-computer interface, a video retrieval engine, an inference engine, a knowledge base, a working memory and an interpreter;
several video sources network automatically, quickly and dynamically, and exchange video information according to the bearing identification code defined in the video source codec rules; the video retrieval engine applies intelligent video retrieval to the big-data video from the local device and from the network layer, performing real-time filtering, detection, recognition, classification and multi-target tracking, and, according to time, shot scale, imagery, intonation, emotion, emotional state and judgments about interpersonal relationships, automatically cuts the video sources into a series of storyboard shots carrying index identification labels, semantic identification labels, shot-scale identification labels or clipping identification labels; the film-editing expert system re-creates, through the inference engine combined with the knowledge base, the storyboard shots filtered and indexed by the video retrieval engine, and generates film programs automatically through cognitive computing on the server.
2. The multichannel multi-view big-data video clipping method according to claim 1, characterized in that:
2.1, the bearing identification code performs direction and attitude calculation based on SLAM, defines the camera shooting the video source together with its position and direction, and marks the conversion from the world coordinate system to the camera coordinate system, from the camera coordinate system to the imaging-plane coordinate system, and from the imaging-plane coordinate system to the image coordinate system; when the camera shooting the video source is static, the bearing identification code comprises: a camera identification code, camera coordinates, camera azimuth, left-right tilt angle, pitch angle and shooting-technique parameters; when the camera shooting the video source is moving, the bearing identification code comprises: the camera identification code together with the timestamped camera route generated as the camera moves, and the change data of camera azimuth, left-right tilt angle, pitch angle and shooting-technique parameters;
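A minimal sketch of the conversion chain named in clause 2.1 (world coordinate system to camera coordinate system to imaging-plane coordinate system to image coordinate system), using a standard pinhole camera model; the matrices and intrinsics below are assumed example values, not values taken from the patent.

```python
import numpy as np

def world_to_camera(p_w, R, t):
    """Rigid transform from world to camera coordinates: p_c = R @ p_w + t."""
    return R @ p_w + t

def camera_to_image(p_c, fx, fy, cx, cy):
    """Pinhole projection onto the imaging plane, then pixel coordinates."""
    x, y = p_c[0] / p_c[2], p_c[1] / p_c[2]      # imaging-plane coordinates
    return np.array([fx * x + cx, fy * y + cy])  # image (pixel) coordinates

R = np.eye(3)                  # camera axes aligned with world axes
t = np.array([0.0, 0.0, 2.0])  # world origin sits 2 m in front of the camera
p = world_to_camera(np.array([0.0, 0.0, 0.0]), R, t)
uv = camera_to_image(p, fx=800, fy=800, cx=320, cy=240)
print(uv)  # [320. 240.] - the world origin projects to the principal point
```

The bearing identification code's azimuth, tilt and pitch would determine `R`, and its camera coordinates would determine `t`.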
2.2, the visual sensor that defines the bearing identification code based on SLAM is a monocular vision system, a binocular vision system, a multi-camera vision system or a panoramic vision system;
2.3, when the visual sensor defining the bearing identification code based on SLAM is a panoramic vision system, the camera coordinates in the bearing identification code are expressed in a spherical coordinate system;
2.4, when there is more than one video source, the wireless ad hoc network, based on the SLAM bearing identification code defined in the video source codec rules, enables direct or multi-hop communication of the multichannel, multi-view big-data video among multiple video source nodes through the network protocol, so that the nodes automatically, quickly and dynamically establish a special-purpose short-range mobile wireless network; each node in this network can be configured to combine terminal and relay-routing functions, both transmitting and receiving data and forwarding it over multiple hops;
2.5, the multiple video source nodes dynamically network and actively exchange bearing identification code information; according to this information each node judges which third-party video source nodes suit its own bearing, angle and distance, and initiates video-data call requests to them according to the set interaction protocol; the third-party video source node, according to the bearing identification code information of the requester, clips the relevant video data in real time out of its local big-data video and sends it over the wireless ad hoc network; in this way multiple video source nodes network automatically, quickly, temporarily and dynamically, and exchange video data relevant to each node over the network layer;
2.6, one method of judging the bearing, angle and distance between this node and a third-party video source node based on SLAM and the bearing identification code information is: from the known bearing identification code of each node, compute the relationship between the three-dimensional space coordinates of the third-party video source node and the coordinates of this node, obtain the translation-rotation transformation between the third-party node's coordinate system and this node's coordinate system, and thereby obtain the spatial coordinate transformation matrix;
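Clause 2.6 can be sketched as follows: given each node's pose (rotation and position in a shared world frame) as carried by its bearing identification code, the translation-rotation transform between the two coordinate systems falls out directly. The poses below are assumed example values.

```python
import numpy as np

def relative_transform(R_a, p_a, R_b, p_b):
    """Return (R_ab, t_ab) mapping points from node B's frame to node A's:
    x_a = R_ab @ x_b + t_ab, assuming each pose obeys x_world = R @ x_local + p."""
    R_ab = R_a.T @ R_b
    t_ab = R_a.T @ (p_b - p_a)
    return R_ab, t_ab

R_a, p_a = np.eye(3), np.array([0.0, 0.0, 0.0])
# Node B: rotated 90 degrees about z, standing 1 m along x from node A.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
R_b, p_b = Rz, np.array([1.0, 0.0, 0.0])

R_ab, t_ab = relative_transform(R_a, p_a, R_b, p_b)
# B's origin expressed in A's frame is simply t_ab, and |t_ab| is the distance:
print(t_ab)  # [1. 0. 0.]
```

The norm of `t_ab` gives the inter-node distance and its direction gives the bearing, which is exactly the information a node needs to pick a suitable third-party source.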
2.7, the wireless ad hoc network is a point-to-point ad hoc network or a wireless mesh network; it may be based on the ZigBee protocol; it may be based on WiFi, where a WiFi-based network may be a direct-connection network based on a WiFi hotspot or a HaLow network; it may be based on Bluetooth, where a Bluetooth-based network may use Bluetooth Low Energy (BLE), the point-to-point Bluetooth networking of the Multipeer Connectivity framework, or iBeacon; it may be an ad hoc network based on ultra-wideband (UWB); and it may use mixed UWB-Bluetooth, mixed WiFi-Bluetooth, or mixed WiFi-ZigBee communication;
when the wireless ad hoc network is based on, or mixes in, WiFi technology, a full-function node can also serve as a cross-network node coupling the wireless ad hoc network to the Internet.
3. The multichannel multi-view big-data video clipping method according to claim 1, characterized in that:
3.1, the film-editing expert system is an intelligent computer program system with the expert knowledge and experience of screenwriting, directing and editing; it models the problem-solving ability of human experts, using the knowledge representation and knowledge reasoning techniques of artificial intelligence to simulate the difficult problems usually solved by experts; the film-editing expert system processes video symbols heuristically and interactively, separating knowledge from control so as to handle uncertain problems and reach an acceptable solution;
3.2, the processing algorithms for the storyboard shots are object detection, object tracking, object recognition, behavior analysis, or content-based video retrieval and data fusion;
based on intelligent video retrieval technology, the video retrieval includes a feature-extraction module, a video-segmentation module, a filtering and video-stabilization module, and an intelligent retrieval matching module;
3.3, the inference engine is the component that realizes reasoning based on script plot, directing and editing knowledge, mainly comprising a reasoning part and a control part; it is a program that interprets knowledge: according to the semantics of the knowledge, it interprets and executes the knowledge found by the chosen strategy, and records the results in the appropriate area of the working memory;
one inference logic of the inference engine is classical logic, which may be deductive logic or inductive logic; another inference logic is non-classical logic, one kind of which is dialectical logic;
one working mode of the inference engine is deduction: several shots carrying clipping identification labels, supplied by the video retrieval engine, are taken as the known facts, from which the film structure, plot understanding or scene planning is derived according to an axiomatic system;
alternatively, one working mode of the inference engine is non-monotonic reasoning, which includes default reasoning based on default information, and constraint reasoning; the logic of default reasoning is: S always holds if and only if no fact proves that the clipping identification label of shot S fails to hold; the logic of constraint reasoning is: S holds only within a specified range if and only if no fact proves that the clipping identification label of shot S holds over a larger range;
alternatively, one working mode of the inference engine is qualitative reasoning, which starts from the physical system or from human intuitive thinking and derives behavior descriptions so as to predict the behavior of the system; in the film-editing expert system, qualitative reasoning uses rules on the partial structure of storyboard shots to predict the plot understanding or scene planning of the film;
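The default-reasoning and constraint-reasoning rules of clause 3.3 can be sketched as two tiny functions over shot labels; the label names and the set-based encoding of "facts" are illustrative assumptions only.

```python
def default_holds(shot_label, contradicting_facts):
    """Default reasoning: the shot's clipping label S is assumed to hold
    unless some known fact explicitly contradicts it."""
    return shot_label not in contradicting_facts

def constrained_range(shot_label, wide_scope_facts, narrow_range):
    """Constraint reasoning: unless some fact establishes the label over the
    wider scope, S holds only within the specified narrow range."""
    if shot_label in wide_scope_facts:
        return "wide"
    return narrow_range

facts = {"shot_7_blurry"}
print(default_holds("shot_3_close_up", facts))  # True: no contradicting fact
print(default_holds("shot_7_blurry", facts))    # False: a fact defeats the default
print(constrained_range("shot_3_close_up", set(), "scene_2_only"))  # scene_2_only
```

The point of both modes is that conclusions about a shot may be retracted when new facts arrive, which is what makes the reasoning non-monotonic.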
3.4, the knowledge base is the collection of screenwriting, directing and editing domain knowledge, including basic facts, rules and other related information; the knowledge base is independent of the system program, and users can improve the performance of the expert system by changing and refining its contents; one kind of knowledge base is built by screenwriters, directors and editing experts, together with deep learning, unsupervised learning, transfer learning or multi-task learning over existing film works, musical works, literary works, 3D models and picture works;
3.5, the working memory is the set reflecting the current problem-solving state, storing all information produced during system operation and the required raw data, including the user's input, intermediate reasoning results and the record of the reasoning process; the state in the working memory, composed of basic facts, propositions and relations, is both the basis on which the inference engine selects knowledge and the source from which the explanation facility obtains its material;
3.6, the interpreter explains the solution process and answers the user's questions, letting the user understand what the program is doing and why;
3.7, the film-editing expert system generates film programs automatically through cognitive computing on the server, working as follows:
A, the network layer calls, based on timestamps, all local video sources and third-party video sources relevant to this node within a set time period;
B, video sources of poor quality are filtered out based on image recognition technology;
C, combining the video retrieval engine with the bearing identification codes and coordinate-matrix identification codes in the video sources, the video sources are cut into storyboard shots and indexed, and the shot scale of each storyboard shot is defined;
D, the storyboard shots are further optimized with a video stabilization algorithm;
E, the storyboard shots are linked to natural semantics by the video retrieval engine;
F, starting from several naturally semantized storyboard shots, the inference engine computes with the rules of the knowledge base and calls materials contained in the knowledge base such as film clips, sound, pictures, text and 3D models, assembling and clipping them together with the logically matching storyboard shots, so that the computer automatically generates film works that reach an acceptable solution;
G, according to the settings of the human-computer interface, the automatically generated film works are stored in the working memory, on the mobile phone or in cloud storage, or transmitted to video websites, social media or mailboxes, or played in real time as streaming media.
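Steps A through G above can be sketched as a pipeline of stubbed stages; every function name, field and heuristic below is an illustrative assumption, not the patent's implementation.

```python
def call_sources(window):                       # A: gather timestamped sources
    return [s for s in window if s["relevant"]]

def filter_quality(sources, min_q=0.5):         # B: drop poor-quality sources
    return [s for s in sources if s["quality"] >= min_q]

def cut_and_index(sources):                     # C: storyboard + index + shot scale
    return [{"src": s["id"], "shot": i, "scale": "medium"}
            for s in sources for i in range(s["n_shots"])]

def stabilize(shots):                           # D: video stabilization (stub)
    return shots

def semantize(shots):                           # E: attach natural semantics
    for sh in shots:
        sh["semantics"] = f"{sh['src']}-shot{sh['shot']}"
    return shots

def assemble(shots):                            # F: inference engine + knowledge
    return [sh["semantics"] for sh in shots]    #    base, stubbed as a cut list

def deliver(edit, target="phone"):              # G: store or stream the result
    return {"target": target, "timeline": edit}

window = [
    {"id": "local", "relevant": True, "quality": 0.9, "n_shots": 2},
    {"id": "peer1", "relevant": True, "quality": 0.2, "n_shots": 3},  # dropped in B
]
film = deliver(assemble(semantize(stabilize(cut_and_index(
    filter_quality(call_sources(window)))))))
print(film["timeline"])  # ['local-shot0', 'local-shot1']
```

Each stub marks where the real system would plug in retrieval, stabilization and knowledge-base reasoning; only the data flow of A-G is modeled here.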
4. The multichannel multi-view big-data video clipping method according to claim 1, characterized in that:
the method provides a video hyperlink function as follows:
4.1, a hyperlink label is defined in the video source codec rules based on SLAM;
4.2, a hyperlink label is defined in the codec rules of the film patterns, pictures, text and 3D model materials contained in the knowledge base;
4.3, the hyperlink label is the connection relation pointing from specific content in one short video to another target short video; during playback of the video source it is displayed as a recognizable, triggerable hot zone that persists throughout the playback of the storyboard shot or material; one kind of hot zone is realized by defining a separate layer in the codec rules of the video source;
4.4, when a hot zone is triggered by mouse, touch, gesture control or eye-movement control, the player jumps to present the short video, subtitles, sound or 3D material defined by the hyperlink label of that hot zone;
4.5, when the playback device is a screen, the information called by the hyperlink label in the video is an internal picture-in-picture link presented within the same picture, or an external link that redirects playback within the same picture; when the playback device is a VR playback device, the information called by the hyperlink label is an internal picture-in-picture link presented within the current field of view, or a viewpoint link that switches to present another viewpoint.
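The hot zone of claim 4 can be sketched as a rectangular, time-bounded region of the frame whose hit test resolves a hyperlink jump; all structures and names below are illustrative assumptions.

```python
class HotZone:
    def __init__(self, rect, t_start, t_end, target):
        self.rect = rect                # (x0, y0, x1, y1) in frame coordinates
        self.t_start = t_start          # start of the active span, seconds
        self.t_end = t_end              # end of the active span, seconds
        self.target = target            # id of the linked short video / material

    def hit(self, x, y, t):
        """True if a pointer/touch/gaze event lands inside the zone while active."""
        x0, y0, x1, y1 = self.rect
        return self.t_start <= t <= self.t_end and x0 <= x <= x1 and y0 <= y <= y1

def resolve_trigger(zones, x, y, t):
    """Return the hyperlink target the player should jump to, if any."""
    for z in zones:
        if z.hit(x, y, t):
            return z.target
    return None  # no jump: playback continues linearly

zones = [HotZone((100, 100, 200, 200), 0.0, 12.0, "background-clip-42")]
print(resolve_trigger(zones, 150, 150, 5.0))   # background-clip-42
print(resolve_trigger(zones, 150, 150, 30.0))  # None - zone no longer active
```

On a flat screen the returned target would drive a picture-in-picture or redirect; on a VR device the same hit test would drive an in-view overlay or a viewpoint switch.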
5. The multichannel multi-view big-data video clipping method according to claim 1, characterized in that:
when the same place contains multiple panoramic-system video sources, the method provides a scene-roaming function of switching in real time to browse other people's viewpoints, as follows:
5.1, when several video sources dynamically network, each video source in the current network is assigned a unique bearing identification code based on SLAM;
5.2, each bearing identification code in the network is displayed on a VR browser or video monitor as a recognizable, triggerable hot zone, or on the network-layer map as a recognizable, triggerable hot spot; the network-layer map is the real-time dynamic distribution map of all members of the wireless ad hoc network;
5.3, when a hot zone or hot spot is triggered by mouse, touch, gesture control or eye-movement control, the local node initiates a video-data call request to the third-party video source node defined by the triggered hot zone or hot spot; after the third-party video source node allows it, or permission is granted according to the network protocol, the VR browser or video monitor switches to play the video shot in real time by the third-party video source node, or the video defined by that node, realizing the scene-roaming function of browsing other people's viewpoints in real time.
CN201610571146.0A 2016-07-20 2016-07-20 A kind of multichannel multi-angle of view big data video clipping method Expired - Fee Related CN106210450B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610571146.0A CN106210450B (en) 2016-07-20 2016-07-20 A kind of multichannel multi-angle of view big data video clipping method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610571146.0A CN106210450B (en) 2016-07-20 2016-07-20 A kind of multichannel multi-angle of view big data video clipping method

Publications (2)

Publication Number Publication Date
CN106210450A CN106210450A (en) 2016-12-07
CN106210450B true CN106210450B (en) 2019-01-11

Family

ID=57494611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610571146.0A Expired - Fee Related CN106210450B (en) 2016-07-20 2016-07-20 A kind of multichannel multi-angle of view big data video clipping method

Country Status (1)

Country Link
CN (1) CN106210450B (en)

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108933970B (en) * 2017-05-27 2022-02-25 北京搜狗科技发展有限公司 Video generation method and device
CN107392883B (en) * 2017-08-11 2019-11-08 逄泽沐风 The method and system that video display dramatic conflicts degree calculates
CN107600067B (en) * 2017-09-08 2019-09-20 中山大学 A kind of autonomous parking system and method based on more vision inertial navigation fusions
WO2019075617A1 (en) * 2017-10-16 2019-04-25 深圳市大疆创新科技有限公司 Video processing method, control terminal and mobile device
CN107833236B (en) * 2017-10-31 2020-06-26 中国科学院电子学研究所 Visual positioning system and method combining semantics under dynamic environment
CN108257216A (en) * 2017-12-12 2018-07-06 北京克科技有限公司 A kind of method, apparatus and equipment in reality environment structure physical model
CN111480348B (en) * 2017-12-21 2022-01-07 脸谱公司 System and method for audio-based augmented reality
CN108322771A (en) * 2017-12-22 2018-07-24 新华网股份有限公司 A kind of multimedia clips method and device based on SCR signals
CN109996010B (en) * 2017-12-29 2021-07-27 深圳市优必选科技有限公司 Video processing method and device, intelligent device and storage medium
CN108230337B (en) * 2017-12-31 2020-07-03 厦门大学 Semantic SLAM system implementation method based on mobile terminal
CN108537157B (en) * 2018-03-30 2019-02-12 特斯联(北京)科技有限公司 A kind of video scene judgment method and device based on artificial intelligence classification realization
CN109117850B (en) * 2018-06-28 2020-11-24 上海交通大学 Method for identifying corresponding infrared target image by utilizing visible light target image
CN110009674B (en) * 2019-04-01 2021-04-13 厦门大学 Monocular image depth of field real-time calculation method based on unsupervised depth learning
US11074081B2 (en) * 2019-08-02 2021-07-27 GM Global Technology Operations LLC Architecture and method supporting multiple vision stream using shared server on embedded platform
EP4158550A4 (en) * 2020-05-27 2024-02-14 SRI International Neural network explanation using logic
CN111669515B (en) * 2020-05-30 2021-08-20 华为技术有限公司 Video generation method and related device
CN113380088A (en) * 2021-04-07 2021-09-10 上海中船船舶设计技术国家工程研究中心有限公司 Interactive simulation training support system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101093559A (en) * 2007-06-12 2007-12-26 北京科技大学 Method for constructing expert system based on knowledge discovery
KR20100070952A (en) * 2008-12-18 2010-06-28 조선대학교산학협력단 Multimedia content management system
CN101917061A (en) * 2010-07-14 2010-12-15 山东电力集团公司泰安供电公司 Automatic inspection method and device for substation
CN102184200A (en) * 2010-12-13 2011-09-14 中国人民解放军国防科学技术大学 Computer-assisted animation image-text continuity semi-automatic generating method
CN102667855A (en) * 2009-10-19 2012-09-12 Metaio有限公司 Method for determining the pose of a camera and for recognizing an object of a real environment
CN104200422A (en) * 2014-08-28 2014-12-10 邓鑫 Expert system for remote sensing image processing
CN104246656A (en) * 2012-02-23 2014-12-24 谷歌公司 Automatic detection of suggested video edits
CN105224535A (en) * 2014-05-29 2016-01-06 浙江航天长峰科技发展有限公司 Based on the concern target quick reference system of massive video

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070089151A1 (en) * 2001-06-27 2007-04-19 Mci, Llc. Method and system for delivery of digital media experience via common instant communication clients
KR100926760B1 (en) * 2007-12-17 2009-11-16 삼성전자주식회사 Location recognition and mapping method of mobile robot
US8855819B2 (en) * 2008-10-09 2014-10-07 Samsung Electronics Co., Ltd. Method and apparatus for simultaneous localization and mapping of robot
US8643662B2 (en) * 2009-04-22 2014-02-04 Samsung Electronics Co., Ltd. Video entertainment picture quality adjustment
KR101667033B1 (en) * 2010-01-04 2016-10-17 삼성전자 주식회사 Augmented reality service apparatus using location based data and method the same


Also Published As

Publication number Publication date
CN106210450A (en) 2016-12-07

Similar Documents

Publication Publication Date Title
CN106210450B (en) A kind of multichannel multi-angle of view big data video clipping method
US9870798B2 (en) Interactive real-time video editor and recorder
Del Molino et al. Summarization of egocentric videos: A comprehensive survey
Bolanos et al. Toward storytelling from visual lifelogging: An overview
JP5866728B2 (en) Knowledge information processing server system with image recognition system
JP5898378B2 (en) Information processing apparatus and application execution method
Xiong et al. Storyline representation of egocentric videos with an applications to story-based search
KR20190106863A (en) Equipment utilizing human recognition and method for utilizing the same
Singh et al. Recent evolution of modern datasets for human activity recognition: a deep survey
US20140328570A1 (en) Identifying, describing, and sharing salient events in images and videos
Plizzari et al. An outlook into the future of egocentric vision
Stals et al. UrbanixD: From ethnography to speculative design fictions for the hybrid city
WO2014179749A1 (en) Interactive real-time video editor and recorder
CN108984618A (en) Data processing method and device, electronic equipment and computer readable storage medium
Bødker et al. Tourism sociabilities and place: Challenges and opportunities for design
Wang et al. A survey of museum applied research based on mobile augmented reality
JP2017146644A (en) Search device, search method, and search system
Kimura et al. A design approach for building a digital platform to augment human abilities based on a more-than-human perspective
Kimura et al. Designing innovative digital platforms from both human and nonhuman perspectives
Kirby Digital Space and Embodiment in Contemporary Cinema: Screening Composite Spaces
Takata et al. Modeling and analyzing individual's daily activities using lifelog
CN114979741B (en) Method, device, computer equipment and storage medium for playing video
CN114328990B (en) Image integrity recognition method, device, computer equipment and storage medium
Wang et al. Scene Walk: a non-photorealistic viewing tool for first-person video
Li Robust and efficient activity recognition from videos

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20190111