CN107703956A - Virtual interaction system based on inertial motion-capture technology and working method thereof - Google Patents



Publication number
CN107703956A
CN107703956A (application CN201710897035.3A)
Authority
CN
China
Prior art keywords
image
gondola
action
head
inertia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710897035.3A
Other languages
Chinese (zh)
Inventor
韩元凯
袁弘
许玮
刘继东
慕世友
李超英
高玉明
李云亭
张健
傅孟潮
李建祥
刘海波
黄德旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Intelligent Electrical Branch of Shandong Luneng Software Technology Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd
Shandong Luneng Intelligence Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Shandong Electric Power Co Ltd, Shandong Luneng Intelligence Technology Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN201710897035.3A
Publication of CN107703956A (en)
Legal status: Pending


Classifications

    • G05D1/0808 — Control of attitude, i.e. control of roll, pitch, or yaw, specially adapted for aircraft (G — Physics; G05 — Controlling; regulating; G05D — Systems for controlling or regulating non-electric variables; G05D1/00 — Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot; G05D1/08 — Control of attitude)
    • G05D1/101 — Simultaneous control of position or course in three dimensions, specially adapted for aircraft (G05D1/10 — Simultaneous control of position or course in three dimensions)
    • G06F3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality (G06 — Computing; calculating or counting; G06F — Electric digital data processing; G06F3/00 — Input/output arrangements; G06F3/01 — Input arrangements for interaction between user and computer)
    • G06V10/443 — Local feature extraction by matching or filtering (G06V — Image or video recognition or understanding; G06V10/40 — Extraction of image or video features; G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis)

Abstract

The invention discloses a virtual interaction system based on inertial motion-capture technology and a working method thereof. By combining virtual-interaction technology with motion-capture technology, the invention enables a user to locate a module in the scene with inertial actions and thereby decide whether to interact with it. It studies real-time pod control that couples virtual reality with an unmanned aerial vehicle (UAV), achieving accurate focusing on power transmission towers and intelligent image acquisition, and it studies high-speed transmission of high-definition images, providing effective support for first-person-view inspection. The invention improves the efficiency and quality of robotic inspection, saves human-resource costs, and brings considerable economic benefit to inspection work.

Description

Virtual interaction system based on inertial motion-capture technology and working method thereof
Technical field
The present invention relates to a virtual interaction system based on inertial motion-capture technology and a working method thereof.
Background technology
In the conventional sense, the human and the computer are two separate entities, and the human-machine interface serves as the carrier of information between them: the user can only input operating instructions into the computer, which then feeds back the corresponding information and actions. In such an exchange, the user interacts only with the computer and finds it difficult to communicate spontaneously and actively with the object under study.
At present, UAV inspection has become an important means of patrolling transmission lines. During an inspection, the UAV flies with a camera on board; a pilot on the ground controls the flight attitude of the UAV by remote control, while personnel at the ground control station steer the on-board camera by computer, take photographs, and store them on the aircraft's memory card. After the flight, the images on the memory card are uploaded to the back end, where defects are judged through image recognition. The defect recognition rate of this working mode is currently only about 70%, still a considerable gap behind manned inspection. Analysis shows that the main reasons for the low recognition rate are:
1. Camera angle and focal length are constrained by the aircraft's flight attitude, and controlling the camera through the ground control station is cumbersome, so the optimal shooting angle often cannot be obtained;
2. For line-security reasons, the UAV must keep a safe distance of at least 30 meters from the line, which increases the difficulty of shooting and degrades photo clarity;
3. Affected by shooting angle, distance, weather and other factors, image clarity and resolution cannot fully meet the requirements of automatic recognition, which directly lowers the defect recognition rate.
Manned inspection, by contrast, carries inspectors in a helicopter, who tour the line and identify defects with the naked eye and a telescope. In this mode the inspector enjoys a greater degree of freedom: head turns are flexible, and positioning is accurate and convenient — none of which UAV inspection can achieve. If camera-shooting flexibility could therefore be improved while the flight mobility of the UAV is preserved, the inspection efficiency and practicality of UAVs would be substantially improved.
Summary of the invention
To solve the above problems, the present invention proposes a virtual interaction system based on inertial motion-capture technology and a working method thereof. The invention combines virtual-interaction technology with motion-capture technology, so that dexterous human actions can locate a module in the scene and decide whether to interact with the UAV, while achieving accurate focusing on power transmission towers, intelligent image acquisition, and effective support for first-person-view inspection.
To better illustrate the technical scheme, it is first stated as follows: in a virtual interaction system, the user and the surrounding environment are treated as a whole, forming a virtual interactive environment in which the user can actively interact, through the senses such as vision and touch, with objects inside the environment.
To achieve the above objectives, the present invention adopts the following technical scheme:
A virtual interaction system based on inertial motion-capture technology, comprising a virtual reality device, an action model library, an image transmission module, an image panorama stitching module, a real-time pan-tilt control module and a human-computer interaction module, wherein:
the virtual reality device is configured to determine the speed, position and action direction of a moving target so as to capture the operator's inertial actions, and at the same time to receive and display the images transmitted by the human-computer interaction module;
the action model library is configured to store template models of the up-down action, left-right action, near-far action, rotation action, positioning action, and positioning or closing action of the pan-tilt head/pod;
the image transmission module is configured to receive and transmit the transmission-line images collected by the UAV;
the image panorama stitching module is configured to perform feature point extraction, feature point matching and image fusion on the transmission-line images to obtain a panoramic image of the transmission-line state;
the real-time pan-tilt control module is configured to establish the correspondence and parameter conversion between the coordinate systems of the tower, the pan-tilt head and the pod; according to the returned panoramic image, it converts the image target offset into inertial rotation data, and through conversion parameters it converts the collected image offset into the mechanical control quantity of the pan-tilt head and the control quantity of the pod;
the human-computer interaction module is configured to collect the system parameters, real-time video and defect-location information of the UAV pan-tilt head, pack and transmit this information in the corresponding format, issue the pan-tilt and/or pod control strategy determined by the real-time pan-tilt control module, carry out intelligent control, and feed the information back to the virtual reality device.
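The "pack and transmit in the corresponding format" step can be sketched as a fixed-layout binary packet; the field layout, header bytes and units below are illustrative assumptions, not the format defined by the patent.

```python
import struct

# Hypothetical telemetry packet (an assumption for illustration):
# header (2s) | frame id (I) | pan/tilt in centidegrees (2h) | defect x, y (2H)
PACKET_FMT = "<2sI2h2H"

def pack_telemetry(frame_id, pan_cdeg, tilt_cdeg, defect_x, defect_y):
    """Pack pan-tilt telemetry and a defect location into one packet."""
    return struct.pack(PACKET_FMT, b"GT", frame_id,
                       pan_cdeg, tilt_cdeg, defect_x, defect_y)

def unpack_telemetry(buf):
    """Reverse of pack_telemetry, returning degrees instead of centidegrees."""
    magic, frame_id, pan, tilt, x, y = struct.unpack(PACKET_FMT, buf)
    assert magic == b"GT", "bad packet header"
    return {"frame": frame_id, "pan": pan / 100.0,
            "tilt": tilt / 100.0, "defect": (x, y)}

pkt = pack_telemetry(7, 1250, -300, 640, 360)
print(unpack_telemetry(pkt))
```

A fixed binary layout keeps the packet small enough to share the control link with the video downlink described later.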
Further, the virtual reality device comprises an inertial capture module, a display module and sensor devices. The sensor devices include, but are not limited to, a gyroscope, an accelerometer and/or a magnetometer, and measure the current motion state and angular-velocity state of the moving target; the inertial capture module records the motion of the moving target in three-dimensional space, acquires the related physical information through its speed and position parameters, and simulates the movement trajectory from it; the display module displays image or video information.
Further, the motion state includes one or more of forward, backward, upward, downward, leftward and rightward, and the angular-velocity state includes accelerating or decelerating.
Further, the virtual reality device also comprises a data processing unit, which receives the kinematic information collected by the inertial capture module and the sensor devices; as the target moves, the position information of the moving target changes, so the trajectory of the moving target is obtained and its motion capture is completed using the principle of inertial navigation.
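The inertial-navigation principle invoked here — a trajectory obtained from changing position information — can be sketched as simple dead reckoning. The constant sampling interval, Euler integration, and gravity-compensated drift-free readings are simplifying assumptions, not claims of the patent.

```python
def dead_reckon(accels, dt, v0=(0.0, 0.0, 0.0), p0=(0.0, 0.0, 0.0)):
    """Integrate 3-axis acceleration samples twice to get a position track.

    accels: list of (ax, ay, az) in m/s^2, already gravity-compensated
    dt: sampling interval in seconds
    Returns the position after each sample (simple Euler integration).
    """
    v, p = list(v0), list(p0)
    track = []
    for a in accels:
        for i in range(3):
            v[i] += a[i] * dt   # velocity from acceleration
            p[i] += v[i] * dt   # position from velocity
        track.append(tuple(p))
    return track

# Target accelerates at 1 m/s^2 along x for one second, sampled at 10 Hz.
track = dead_reckon([(1.0, 0.0, 0.0)] * 10, 0.1)
print(track[-1])
```

Real inertial capture fuses this with the gyroscope and magnetometer to limit the drift that raw double integration accumulates.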
Further, the data processing unit exchanges real-time data with the action model library and reads all the movement action models; the movement action models are combined with the captured action data to achieve matching against the models, thereby driving the motion of the pan-tilt head. In the end the models match the captured data and change along with it.
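The matching of captured action data against stored models can be sketched as a nearest-template lookup; the six direction-vector templates and the feature encoding are illustrative assumptions standing in for the patent's action model library.

```python
import math

# Hypothetical action templates: each model is a unit direction vector
# (dx, dy, dz) for the pan-tilt head / pod; the encoding is an assumption.
ACTION_MODELS = {
    "up":    (0.0, 1.0, 0.0),
    "down":  (0.0, -1.0, 0.0),
    "left":  (-1.0, 0.0, 0.0),
    "right": (1.0, 0.0, 0.0),
    "near":  (0.0, 0.0, 1.0),
    "far":   (0.0, 0.0, -1.0),
}

def match_action(captured):
    """Return the template whose direction is closest to the captured motion."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(ACTION_MODELS, key=lambda name: dist(ACTION_MODELS[name], captured))

print(match_action((0.1, 0.9, 0.05)))  # a mostly-upward motion matches "up"
```

The matched template name is what would then be translated into a pan-tilt command.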
Further, the image transmission module compresses the image data and converts its format, maximizing the share of the data communication channel given to compressed data while guaranteeing image validity and frame rate; at the ground receiving end, a decoding mechanism restores the high-definition image. During image data transmission, different control signals are carried over different data transmission links.
The virtual reality device is a VR or AR head-mounted display.
A working method based on the above system comprises the following steps:
(1) collecting the kinematic information of the moving target with the virtual reality device and completing motion capture of the moving target;
(2) matching the determined spatial attitude and action of the moving target against the stored action models, identifying the action command issued by the target, and sending it to the UAV to control it to complete the corresponding action;
(3) the UAV performs the corresponding action and collects scene images within the observable field of view; the images undergo frame decimation, quantization and decoding;
(4) synthesizing the high-voltage tower images with image stitching technology to obtain an ultra-high-resolution panoramic image;
(5) performing coordinate conversion on the pan-tilt head and/or pod; according to the returned panoramic image, converting the image target offset into inertial rotation data, and converting the collected image offset through conversion parameters into the mechanical control quantity of the pan-tilt head and the control quantity of the pod, so that image-acquisition flexibility and real-time performance are improved while the flight mobility of the UAV is preserved.
Further, in step (1), the collected raw data, including acceleration and angular-velocity values, are used by a data fusion algorithm to calculate the spatial attitude angles of the target, including roll, pitch and/or yaw.
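One common data-fusion algorithm for exactly this step is a complementary filter, which blends the integrated gyroscope rates with the tilt implied by the accelerometer's gravity reading. The blend factor and the tilt formulas below are standard textbook choices, not values taken from the patent.

```python
import math

def complementary_filter(roll, pitch, gyro, accel, dt, alpha=0.98):
    """Fuse gyroscope rates with accelerometer tilt into roll/pitch (radians).

    gyro: (gx, gy) angular rates in rad/s; accel: (ax, ay, az) in m/s^2.
    The gyro integral is trusted short-term, the accelerometer long-term.
    """
    ax, ay, az = accel
    # Tilt angles implied by the gravity vector the accelerometer sees.
    accel_roll = math.atan2(ay, az)
    accel_pitch = math.atan2(-ax, math.hypot(ay, az))
    # Blend the integrated gyro rate with the accelerometer estimate.
    roll = alpha * (roll + gyro[0] * dt) + (1 - alpha) * accel_roll
    pitch = alpha * (pitch + gyro[1] * dt) + (1 - alpha) * accel_pitch
    return roll, pitch

# Stationary, level device: gyro reads zero, accelerometer sees gravity on z,
# so an initially wrong attitude estimate decays toward zero.
r, p = 0.3, -0.2
for _ in range(500):
    r, p = complementary_filter(r, p, (0.0, 0.0), (0.0, 0.0, 9.81), 0.01)
print(round(r, 3), round(p, 3))
```

Yaw cannot be recovered from gravity alone, which is why the patent also lists a magnetometer among the sensor devices.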
Further, in step (3), the image processing specifically includes: frame decimation of the video on the premise that the visual effect is not affected; quantization and encoding of the analog image signal with 256 pixel values, completing the data-signal processing; and then inverse quantization, intra-frame prediction and in-frame compensation performed on the image to recover the video data.
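The decimation and quantization steps can be sketched as follows: keep every n-th frame to save bandwidth, and map each analog sample onto one of 256 integer levels. The decimation interval is chosen arbitrarily here; the patent does not specify one.

```python
def decimate(frames, keep_every=3):
    """Frame decimation: keep every n-th frame to reduce bandwidth."""
    return frames[::keep_every]

def quantize(sample, lo=0.0, hi=1.0, levels=256):
    """Map an analog sample in [lo, hi] onto one of 256 integer pixel values."""
    sample = min(max(sample, lo), hi)  # clamp out-of-range samples
    return round((sample - lo) / (hi - lo) * (levels - 1))

frames = list(range(30))           # one second of 30 fps video, stand-ins
print(len(decimate(frames)))       # 10 frames survive 3:1 decimation
print(quantize(0.0), quantize(1.0))
```

In the real pipeline these steps sit inside the H.264 encoder described in the embodiments; this sketch only isolates the two operations the claim names.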
Further, in step (4), feature points are extracted from the images: the ultra-high-resolution high-voltage tower images are read in and down-sampled; the ultra-high-resolution images to be stitched are reduced by bilinear interpolation, and features are extracted from all of the reduced images with the ORB algorithm.
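The bilinear down-sampling can be sketched in a few lines; a grayscale image stored as a list of rows is assumed purely for illustration (a real pipeline would use a library resize).

```python
def bilinear_resize(img, new_w, new_h):
    """Resample a grayscale image (list of rows) with bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        y = j * (h - 1) / max(new_h - 1, 1)
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for i in range(new_w):
            x = i * (w - 1) / max(new_w - 1, 1)
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            # Weighted average of the four surrounding pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

img = [[0, 100], [100, 200]]             # tiny 2x2 stand-in "image"
print(bilinear_resize(img, 3, 3)[1][1])  # centre interpolates to 100.0
```

Extracting ORB features on the reduced image, as the claim describes, makes the coarse matching stage far cheaper than working at full resolution.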
Further, in step (4), nearest-neighbour matching is performed on the extracted ORB features, and the resulting matched point pairs are screened with the RANSAC algorithm to obtain coarse matched pairs; using the coordinates of the coarse matched pairs, the corresponding coordinates in the original ultra-high-resolution images are calculated, and ORB features are extracted again in the image blocks around those matched pairs in the original ultra-high-resolution images for accurate matching.
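The RANSAC screening idea can be sketched independently of any vision library: repeatedly hypothesize a transform from a random match and keep the pairs that agree with it. A pure-translation model stands in here for the homography a real stitcher would fit, purely for illustration.

```python
import random

def ransac_translation(pairs, iters=200, tol=2.0, seed=0):
    """Screen matched point pairs with RANSAC using a translation model.

    pairs: list of ((x1, y1), (x2, y2)) candidate matches.
    Returns (best_translation, inlier_pairs).
    """
    rng = random.Random(seed)
    best_t, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        tx, ty = x2 - x1, y2 - y1       # candidate transform from one sample
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - tx) < tol
                   and abs(p[1][1] - p[0][1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_t, best_inliers = (tx, ty), inliers
    return best_t, best_inliers

# Eight matches shifted by (10, 5) plus two gross outliers.
good = [((i, i * 2), (i + 10, i * 2 + 5)) for i in range(8)]
bad = [((0, 0), (50, 50)), ((3, 1), (-20, 7))]
t, inliers = ransac_translation(good + bad)
print(t, len(inliers))  # → (10, 5) 8
```

The surviving inliers are the "coarse matched pairs" that the claim then refines at full resolution.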
Further, in step (4), adjacent ultra-high-resolution images are fused with the fade-in/fade-out method to obtain the ultra-high-resolution panoramic image.
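Fade-in/fade-out fusion weights each sample in the overlap region by its position across the seam: the left image's weight ramps down as the right image's ramps up. The 1-D row example below is a minimal sketch of the rule.

```python
def fade_blend(left, right, overlap):
    """Fuse two rows whose last/first `overlap` samples cover the same scene.

    In the overlap, the left image's weight ramps from 1 to 0 while the
    right image's ramps from 0 to 1 (the fade-in/fade-out rule).
    """
    out = list(left[:-overlap])
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)   # right image's weight at this column
        out.append(left[len(left) - overlap + i] * (1 - w) + right[i] * w)
    out.extend(right[overlap:])
    return out

print(fade_blend([10, 10, 10, 10], [30, 30, 30, 30], 2))
```

The gradual ramp hides the exposure difference between adjacent shots that a hard cut at the seam would expose.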
Further, step (5) includes:
(1) coordinate conversion: the tower, the UAV platform and the pod lie in three coordinate systems; the tower and UAV platform are jointly converted, and the UAV platform and pod are jointly converted, so that the three can be converted in direct succession;
(2) determining the rotation quantity: according to the tour video displayed in real time by the virtual reality device, the transmission-line component and defect position are accurately found and located; the focusing function of the virtual device is started, its inertial information is obtained in real time and mapped into the pod coordinate system by coordinate conversion, yielding the mechanical control quantity of the pod;
(3) communication encoding of the pod's mechanical control quantity by the UAV ground-station encoder: the control quantity is converted from angle and position offset information into the horizontal, vertical and angular-velocity information of the pod, uploaded to the pod through the data control link, and drives the pod and the visible-light acquisition device it carries to complete image acquisition.
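Chaining the three coordinate systems amounts to composing rotations, and converting an image offset into pod control amounts to scaling pixel error into angular rates. The sketch below composes yaw rotations only and uses an assumed proportional gain; both simplifications are for illustration, not the patent's parameters.

```python
import math

def rot_z(angle):
    """2-D rotation matrix for a yaw angle (radians)."""
    c, s = math.cos(angle), math.sin(angle)
    return [[c, -s], [s, c]]

def mat_vec(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]

def tower_to_pod(vec, uav_yaw, pod_yaw):
    """Convert a tower-frame vector through the UAV frame into the pod frame."""
    in_uav = mat_vec(rot_z(-uav_yaw), vec)   # tower -> UAV platform
    return mat_vec(rot_z(-pod_yaw), in_uav)  # UAV platform -> pod

def offset_to_rates(dx_px, dy_px, gain=0.05):
    """Map an image offset (pixels from centre) to pod angular rates (deg/s).

    The proportional gain is an illustrative assumption, not a patent value.
    """
    return dx_px * gain, -dy_px * gain

v = tower_to_pod([1.0, 0.0], math.pi / 2, 0.0)
print(round(v[0], 6), round(v[1], 6))  # tower x-axis seen from a UAV yawed 90 deg
print(offset_to_rates(40, -20))
```

Driving the pod with rates proportional to the remaining pixel offset is what lets the loop "focus" on a tower component until it sits at the image centre.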
Further, the UAV control platform works as follows:
(1) open the video channel, set the parameters and video window size, then initialize the communication parameters, obtain the UAV real-time video resource and display it;
(2) by observing the dynamics of the real-time video, and according to the detection pattern and user experience, judge whether to accept the current position;
(3) analyze the technical, external and accidental factors of defect location during UAV inspection, identify the factors at each business-flow node that may affect defect-location accuracy, and formulate countermeasures;
(4) according to the user's habitual way of thinking, judge the position to which the pan-tilt head should be adjusted and adjust the position of the UAV pod according to the video angle;
(5) control the direction and functions of the pan-tilt head according to the user's actions and, at the same time, control the UAV pod in real time, achieving accurate focusing on power transmission towers and intelligent acquisition.
Compared with the prior art, the beneficial effects of the present invention are:
(1) it improves working efficiency and saves expenditure: it improves the efficiency and quality of robotic inspection, saves human-resource costs, and brings considerable economic benefit to inspection work;
(2) it accelerates technological innovation and raises the application level, mastering a set of key technologies of virtual-reality interactive operation and obtaining important research results in fields such as perception, transmission, processing and application;
(3) it makes operation intelligent and guarantees operation-and-inspection safety, promotes users' willingness to use intelligent equipment such as robots and UAVs, and promotes the healthy and sustainable development of China's robot industry, contributing to fewer safety accidents and a lower robot error rate;
(4) it optimizes the user experience and promotes the spread of new technology: the user can actively interact with objects inside the environment through the senses (vision and touch), and this virtual-reality-based interaction is a major technical advance over human-computer interfaces in the traditional sense.
Brief description of the drawings
The accompanying drawings, which form a part of the application, are provided for further understanding of the application; the exemplary embodiments of the application and their descriptions are used to explain the application and do not constitute an improper limitation of it.
Fig. 1 is the virtual interaction architecture diagram;
Fig. 2 is the flow chart of inertial action acquisition and the control model;
Fig. 3 is the real-time control structure diagram of the virtual interaction platform system based on inertial action-capture technology;
Fig. 4 is the hardware architecture design diagram of the virtual interaction platform based on inertial action-capture technology;
Fig. 5 is the H.264 encoding diagram;
Fig. 6 is the H.264 decoding diagram.
Detailed description of embodiments:
The invention will be further described below with reference to the accompanying drawings and embodiments.
It should be noted that the following detailed description is exemplary and intended to provide further explanation of the application. Unless otherwise indicated, all technical and scientific terms used herein have the same meanings as commonly understood by one of ordinary skill in the art to which the application belongs.
It should also be noted that the terms used herein are merely for describing specific embodiments and are not intended to limit the exemplary embodiments according to the application. As used herein, the singular forms are also intended to include the plural forms unless the context clearly indicates otherwise; furthermore, it should be understood that when the terms "comprising" and/or "including" are used in this specification, they indicate the presence of the stated features, steps, operations, devices, components and/or combinations thereof.
In the present invention, terms indicating orientation or positional relationship, such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "side" and "bottom", are based on the orientation or positional relationship shown in the drawings, are used only to facilitate describing the structural relationships of the parts or components of the invention, do not designate any specific part or element of the invention, and are not to be understood as limiting the invention.
In the present invention, terms such as "fixedly connected", "attached" and "connected" are to be interpreted broadly: a connection may be fixed, integral or detachable, and may be direct or indirect through an intermediary. For scientific or technical personnel in this field, the specific meanings of the above terms in the present invention can be determined as the case may be and are not to be understood as limiting the invention.
As introduced in the background, prior-art UAV inspection is limited by many factors, such as shooting angle, focusing and line security, and the defect recognition rate of its working mode is only about 70%, still a considerable gap behind manned inspection. To solve the above technical problems, the present application proposes a virtual interaction system based on inertial motion-capture technology and a working method thereof. The invention provides a realistic, intelligent, interactive and scalable three-dimensional visual operating environment; it studies the pan-tilt control of a high-definition camera by a VR/AR head-mounted display, so that a user can locate a module in the scene with inertial actions and decide whether to interact with it; it studies real-time pod control that couples virtual reality with the UAV, achieving accurate focusing on power transmission towers and intelligent acquisition; and it studies high-speed transmission of high-definition images, providing effective support for first-person-view inspection.
In a typical embodiment, as shown in Fig. 1, a virtual interaction system based on inertial motion-capture technology treats the user and the surrounding environment as a whole, using the relevant hardware and software to form a virtual interactive environment in which the user can actively interact, through the senses (vision and touch), with objects inside the environment. The system comprises: a VR head-mounted display, a motion-capture module, an action model library, an image transmission module, an image panorama stitching module, a real-time pan-tilt control module and a human-computer interaction module.
The VR head-mounted display is a recent device of the computer field that comprehensively integrates computer graphics, multimedia, sensor, human-computer interaction, network, stereoscopic display and simulation technologies.
The inertial capture module records the motion of an object in three-dimensional space and acquires the related physical information through parameters such as speed and position, from which the movement trajectory is simulated. With this technology, the tracked target wears a head-mounted display with an integrated accelerometer and inertial sensor devices such as a gyroscope and magnetometer. This is a complete motion-capture system in which multiple components cooperate, consisting of the inertial devices and a data processing unit.
The inertial sensor devices, such as the gyroscope, accelerometer and magnetometer, measure the current motion state and angular-velocity state of the equipment; the motion state includes one or more of forward, backward, upward, downward, leftward and rightward, and the angular-velocity state includes accelerating or decelerating.
The data processing unit uses the kinematic information collected by the inertial devices: as the target moves, the position information of these components changes, from which the trajectory of the target's motion is obtained, and the motion capture of the moving target can then be completed by the principle of inertial navigation.
The data processing unit also exchanges real-time data with the action model library. All movement action models are stored in the action model database; to let the captured action data drive the pan-tilt control, the models are combined with the captured action data to achieve matching, thereby driving the motion of the pan-tilt head. In the end the models match the captured data and move along with it.
The action model library includes the up-down action, left-right action, near-far action, rotation action, positioning action, and positioning or closing action of the pan-tilt head/pod.
The image transmission module displays the transmission-line state in real time through the image files returned by the communication system during UAV inspection. Transmission lines are usually in remote locations with poor public-network coverage, while UAV inspection demands high transmission quality and real-time performance, so the data-transmission task cannot be completed over the public communication network. To achieve high-quality inspection communication and meet the display requirements of the VR equipment, a dedicated inspection data communication network must be established to complete real-time transmission of the video data.
To achieve real-time transmission of high-definition images, the image data are compressed and format-converted, maximizing the share of the data communication channel given to compressed data while guaranteeing image validity and frame rate. At the ground receiving end, a decoding mechanism restores the high-definition image. During image data transmission, data such as UAV control signals and pan-tilt control signals use different data transmission links; that is, a dedicated high-definition image transmission link is opened. When displaying on the VR equipment, only the downlink data of the airborne end need to be shown.
The image panorama stitching module: a common digital imaging device cannot capture an ultra-high-resolution panoramic image in a single shot. Image stitching technology solves this problem and successfully synthesizes the ultra-high-resolution high-voltage tower image. Panoramic stitching mainly involves three aspects — feature point extraction, feature point matching and image fusion — and the quality of feature point extraction directly affects the subsequent stitching.
The real-time pan-tilt control module: the tower, the UAV platform and the pod lie in three coordinate systems. To control the UAV pod, first, the correspondence and conversion parameters between the three coordinate systems are calculated; secondly, according to the returned image, the VR equipment converts the image target offset into inertial rotation data; finally, the collected image offset is converted through conversion parameters into the mechanical control quantity of the pan-tilt head. The pan-tilt control quantity is logically encoded by the ground station, transmitted to the airborne end over the data link, decoded by the airborne decoder, and the control of the pod is completed.
The human-computer interaction module collects the system parameters, real-time video and defect-location information of the UAV pan-tilt head through the high-speed high-definition image communication channel, and packs and transmits this information in the corresponding format. The human-computer interaction platform also issues the pan-tilt/pod control strategy through this channel and carries out intelligent control.
As shown in figure 3, a kind of method of work of the virtual interaction system based on inertia capturing technology, comprises the following steps:
Step 1: inertia action is caught, the data message of inertial sensor equipment is gathered, being tracked target needs in VR heads Integrated accelerometer is worn in aobvious equipment, the inertial sensor equipment such as gyroscope and magnetometer, this is a whole set of motion capture , it is necessary to which multiple components cooperate, it is made up of system inertia device and data processing unit, and data processing unit utilizes used Property the kinematics information that collects of device, when target is in motion, the positional information of these components is changed, so as to obtain mesh The track of motion is marked, the motion capture of moving target can be completed by inertial navigation principle again afterwards.
The initial data collected such as acceleration magnitude, magnitude of angular velocity is transferred in the minds of microprocessor core, by data Blending algorithm calculates object space attitude angle such as roll angle, the angle of pitch, course angle (Eulerian angles representation) etc..Again will be numerous Data summarization at sensor node is transferred in action model storehouse and handled.
Step 2: as shown in Fig. 2 action storage and Model Matching, all shift action models we be stored in action In module database, in order to allow the action data captured to drive cradle head control, it would be desirable to by model and capture dynamic Make data combination, the matching with model is realized, so as to drive the motion of head.Robot is by the information transmission of hazardous environment to control Person processed, effector make various actions according to information, and motion capture system gets off motion capture, are real-time transmitted to robot simultaneously It is controlled to complete same action.
Step 3: collection and transmission of images. A video acquisition device collects the scene image within the observable line of sight. Transmission lines are usually located in remote areas where public communication network coverage is poor, while UAV inspection places high demands on data transmission quality and real-time performance, so the data transmission task cannot be completed over the public communication network. To achieve high-quality inspection communication and meet the display requirements of the VR equipment, a dedicated inspection data communication network must be set up to complete the real-time transmission of video data.
To achieve high-definition, fast display of transmission line images on the VR equipment, H.264 video compression technology is used to encode and decode the high-definition video. As shown in Fig. 5 and Fig. 6, this mainly includes:
1) Frame extraction: on the premise of not affecting the visual effect, frames are extracted from the video;
2) Image quantization: the analog signal of the image is quantized into 256 pixel values and encoded, completing the conversion to a data signal;
3) Video decoding: inverse quantization, intra-frame prediction and inter-frame compensation are applied to the image to recover the video data.
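The frame extraction and 256-level quantization steps above can be sketched in a few lines. This is a schematic illustration only; the actual H.264 pipeline performs these operations inside the codec, and the function names are assumptions.

```python
def decimate(frames, keep_every=2):
    """Frame extraction: keep every n-th frame to reduce bitrate
    without a large impact on visual effect."""
    return frames[::keep_every]

def quantize(sample, lo=0.0, hi=1.0, levels=256):
    """Map an analog sample in [lo, hi] onto one of 256 integer
    levels, i.e. an 8-bit pixel value."""
    sample = min(max(sample, lo), hi)
    return min(int((sample - lo) / (hi - lo) * levels), levels - 1)
```

Decoding reverses the quantization (inverse quantization) and reconstructs skipped content from intra-frame prediction and inter-frame compensation.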
Step 4: stitching and real-time display of images. A common digital imaging device cannot capture a panoramic, ultra-high-resolution image in a single shot. Image stitching technology solves this problem smoothly and successfully synthesizes the ultra-high-resolution high-voltage tower image.
(1) feature point extraction
First, the ultra-high-resolution high-voltage tower image is read and downsampled: the ultra-high-resolution images to be stitched are reduced by sampling with bilinear interpolation. Feature extraction is then performed on all of the reduced images using the ORB algorithm. ORB features use the Oriented FAST feature point detection operator and the Rotated BRIEF feature descriptor. The ORB algorithm not only matches the detection quality of SIFT features, with invariance to rotation, scaling and brightness changes, but, most importantly, its time complexity is greatly reduced compared with SIFT, so the ORB algorithm has great application prospects in high-definition image stitching and real-time video image stitching.
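The bilinear sampling reduction step can be sketched as follows; in practice an optimized library routine (e.g. OpenCV's resize and ORB detector) would be used, so this pure-Python version is for illustration only, with grayscale images represented as lists of rows.

```python
def bilinear_sample(img, x, y):
    """Bilinearly interpolate a grayscale image at float coordinates (x, y)."""
    h, w = len(img), len(img[0])
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
    bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
    return top * (1 - fy) + bot * fy

def downsample(img, factor):
    """Shrink an image by an integer factor via bilinear sampling."""
    h, w = len(img) // factor, len(img[0]) // factor
    return [[bilinear_sample(img, x * factor, y * factor) for x in range(w)]
            for y in range(h)]
```

ORB features are then extracted from the reduced images, which keeps detection time low on ultra-high-resolution inputs.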
(2) Feature Points Matching
Nearest-neighbour matching is performed on the extracted ORB features, and the resulting match pairs are screened by the RANSAC algorithm to obtain coarse match pairs. Using the coordinates of the extracted coarse match pairs, the corresponding coordinates in the original ultra-high-resolution images are calculated, and ORB features are extracted again in the image blocks around those match points of the original ultra-high-resolution images for accurate matching. Finally, the transformation matrix H between adjacent images is calculated.
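The RANSAC screening of coarse match pairs can be sketched with a deliberately simplified motion model. For brevity the sketch fits a pure translation between the two images, whereas the patent estimates a full homography H; the function name and tolerances are assumptions.

```python
import random

def ransac_translation(pairs, tol=2.0, iters=200, seed=0):
    """Screen match pairs with RANSAC: repeatedly hypothesize a
    translation from one random pair and keep the largest consensus
    set. Each pair is ((x1, y1), (x2, y2))."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.choice(pairs)
        dx, dy = x2 - x1, y2 - y1  # hypothesized translation
        inliers = [p for p in pairs
                   if abs(p[1][0] - p[0][0] - dx) <= tol
                   and abs(p[1][1] - p[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers
```

In the real pipeline the surviving inliers feed the least-squares estimate of the homography H between adjacent images.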
(3) image co-registration
The fade-in/fade-out method is used to fuse adjacent ultra-high-resolution images, yielding the ultra-high-resolution panoramic image and completing the stitching.
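Fade-in/fade-out fusion weights the two images linearly across their overlap: the left image's weight ramps from 1 to 0 while the right image's ramps from 0 to 1, hiding the seam. A one-dimensional sketch (one row of the overlap region, illustrative function name):

```python
def fade_blend(left, right):
    """Blend two overlapping pixel rows of equal length with linearly
    ramped weights (fade-in/fade-out fusion)."""
    n = len(left)
    out = []
    for i, (a, b) in enumerate(zip(left, right)):
        w = i / (n - 1) if n > 1 else 0.5  # 0 at left edge, 1 at right edge
        out.append((1 - w) * a + w * b)
    return out
```

Applying this per row over the overlap region gives a seamless transition between adjacent stitched images.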
Step 5: real-time head control. Real-time control of the head/gondola mainly includes the following steps:
(1) Coordinate conversion. The tower, the UAV platform and the pod lie in three different coordinate systems; the tower (global coordinates) is jointly converted with the UAV platform, and the UAV platform is jointly converted with the gondola, so that the three are associated and converted step by step.
(2) Determination of the amount of rotation. From the inspection video displayed in real time on the virtual reality device, the operator accurately finds the transmission line components and defect positions, locates the position of the component or defect, and starts the focusing function of the virtual device. The inertial information of the virtual device is obtained in real time and mapped into the gondola coordinate system through coordinate conversion, yielding the mechanical control amount of the gondola.
(3) Information transfer. The mechanical control amount of the gondola is communication-encoded by the UAV ground-station encoder; the control amount is converted into angle and position offset information and into the horizontal, vertical and angular velocity information of the gondola, and is uploaded to the gondola over the data control link, driving the gondola and the visible-light acquisition equipment it carries to complete image acquisition.
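The mapping from the headset's inertial attitude to a gondola control amount can be sketched as an angular offset into the pod's frame followed by clamping to the gimbal's mechanical range. The function name, mount offsets and limits are all illustrative assumptions; the patent only states that the mapping is done by coordinate conversion.

```python
import math

def head_to_gimbal(head_yaw, head_pitch, mount_yaw=0.0, mount_pitch=0.0,
                   yaw_limit=math.pi, pitch_limit=math.pi / 3):
    """Map headset yaw/pitch (world frame) into the pod's own frame by
    subtracting the mount attitude, then clamp to mechanical limits."""
    clamp = lambda v, lim: max(-lim, min(lim, v))
    return (clamp(head_yaw - mount_yaw, yaw_limit),
            clamp(head_pitch - mount_pitch, pitch_limit))
```

The resulting pair would be encoded by the ground-station encoder as the angle/offset information uploaded over the data control link.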
Step 6: the specific functions and working mode of the human-computer interaction module, as shown in Fig. 4:
1) Video display: in the platform start-up stage, the video path is first opened and the parameters and video window size are set; the communication parameters are then initialized, and the UAV real-time video source is acquired and displayed.
2) Information reception: by observing the dynamics of the real-time video, and according to the user's conventional detection mode and experience, the user judges whether to accept the current position.
3) User decision-making: the technical, external and accidental factors affecting defect location during UAV inspection are analyzed, the factors at each business process node that may affect defect location accuracy are identified, and countermeasures are formulated.
4) Thinking control: according to the thinking habits of the user, the head adjustment position is judged and the position of the UAV gondola is adjusted according to the video angle.
5) Head control: the control of the head is determined by the user's head movements, which include up/down, left/right and fore-and-aft movements as well as actions such as rotating and holding still, so as to control the direction and function of the head.
6) Equipment control: while controlling the head, the gondola of the UAV is also controlled in real time, realizing accurate focusing on, and intelligent acquisition of, the power line tower.
The foregoing is only the preferred embodiment of the present application and does not limit it; for those skilled in the art, the application may have various modifications and variations. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of this application shall be included within its scope of protection.
Although the above specific embodiments of the present invention are described with reference to the accompanying drawings, they do not limit the scope of protection of the present invention. Those skilled in the art should understand that various modifications or variations that can be made on the basis of the technical scheme without creative work still fall within the scope of protection of the present invention.

Claims (14)

1. A virtual interaction system based on inertial capture technology, characterized by comprising a virtual reality device, an action model library, an image transmission module, an image panorama stitching module, a head real-time control module and a human-computer interaction module, wherein:
the virtual reality device is configured to determine the speed, position and direction of action of a moving target so as to capture the inertial actions of the operator, and, at the same time, to receive and display the images transmitted by the human-computer interaction module;
the action model library is configured to store template models of the up/down action, left/right action, near/far action, spinning action, locating action, and positioning or closing action of the head/gondola;
the image transmission module is configured to receive and transmit the transmission line images collected by the UAV;
the image panorama stitching module is configured to perform feature point extraction, feature point matching and image fusion on the transmission line images to obtain a panoramic image of the transmission line state;
the head real-time control module is configured to perform coordinate-system correspondence and parameter conversion among the tower, the head and the pod, to convert the image target offset into inertial rotation data according to the returned panoramic image, and to convert the collected image offset, through conversion parameters, into the mechanical control amount of the head and the control amount of the gondola;
the human-computer interaction module is configured to collect the system parameters, real-time video and defect location information of the UAV head, to pack and transmit this information in the corresponding format, to issue the head or/and gondola control strategy scheme determined by the head real-time control module so as to control it intelligently, and to feed information back to the virtual reality device.
2. The virtual interaction system based on inertial capture technology as claimed in claim 1, characterized in that: the virtual reality device comprises an inertial capture module, a display module and sensor devices, the sensor devices including but not limited to a gyroscope, an accelerometer and/or a magnetometer, which measure the current motion state and angular velocity state of the moving target; the inertial capture module records the motion of the moving target in three-dimensional space to obtain physical information related to its speed and position, and then simulates the motion trajectory; the display module displays image or video information.
3. The virtual interaction system based on inertial capture technology as claimed in claim 2, characterized in that: the motion state comprises one or more of forward, backward, upward, downward, leftward and rightward movement, and the angular velocity state comprises acceleration or deceleration.
4. The virtual interaction system based on inertial capture technology as claimed in claim 1, characterized in that: the virtual reality device further comprises a data processing unit that receives the kinematic information collected by the inertial capture module and the sensor devices; when the target moves, the positional information of the moving target changes, from which the trajectory of the moving target is obtained, and the motion capture of the moving target is completed using the inertial navigation principle.
5. The virtual interaction system based on inertial capture technology as claimed in claim 4, characterized in that: the data processing unit and the action model library carry out real-time data transmission; all movement action models are read and combined with the captured action data to achieve matching with a model, so as to drive the motion of the head; finally the model is matched with the captured data, and the model can change along with the captured action data.
6. The virtual interaction system based on inertial capture technology as claimed in claim 1, characterized in that: the image transmission module compresses the image data and converts its format, maximizing the proportion of the data communication channel devoted to compressed data on the premise of ensuring image validity and frame rate; at the ground receiving end, the high-definition image is recovered by a decoding mechanism; during image data transmission, different control signals are transmitted over different data transmission links.
7. based on the method for work of the system as any one of claim 1-6, it is characterized in that:Comprise the following steps:
(1) kinematics information of moving target is collected using virtual reality device, completes the motion capture of moving target;
(2) spatial attitude of the moving target of determination and action are matched with the action model stored, identifies action target The action command sent, and be sent to unmanned plane and control it to complete corresponding act;
(3) unmanned plane performs corresponding action, it is observed that visual line of sight in gather scene image, image is taken out Frame, quantization and decoding process;
(4) synthesis of high pressure shaft tower image is carried out using image mosaic technology, obtains ultrahigh resolution panoramic picture;
(5) Coordinate Conversion is carried out to head or/and gondola, according to the panoramic picture of passback, image object offset be converted into Inertia rotation data, the image shift amount collected is converted into Mechanical course amount and the control of gondola of head by conversion parameter Amount processed, flexibility and the real-time of IMAQ are lifted while keeping unmanned plane during flying mobility to realize.
8. The working method as claimed in claim 7, characterized in that: in step (1), from the collected raw data, including acceleration values and angular velocity values, the spatial attitude angles of the object, including the roll angle, pitch angle and/or course angle, are calculated using a data fusion algorithm.
9. The working method as claimed in claim 7, characterized in that: in step (3), the image processing specifically includes: extracting frames from the video on the premise of not affecting the visual effect; quantizing the analog signal of the image into 256 pixel values and encoding it, completing the data-signal processing; and then performing inverse quantization, intra-frame prediction and inter-frame compensation on the image to recover the video data.
10. The working method as claimed in claim 7, characterized in that: in step (4), feature points are extracted from the image: the ultra-high-resolution high-voltage tower image is read and downsampled; the ultra-high-resolution images to be stitched are reduced by sampling with bilinear interpolation, and feature extraction is performed on all the reduced images using the ORB algorithm.
11. The working method as claimed in claim 10, characterized in that: in step (4), nearest-neighbour matching is performed using the extracted ORB features, and the resulting match pairs are screened by the RANSAC algorithm to obtain coarse match pairs; using the coordinates of the extracted coarse match pairs, the corresponding coordinates in the original ultra-high-resolution images are calculated, and ORB features are extracted again in the image blocks around those match points of the original ultra-high-resolution images for accurate matching.
12. The working method as claimed in claim 7, characterized in that: in step (4), the fade-in/fade-out method is used to fuse adjacent ultra-high-resolution images to obtain an ultra-high-resolution panoramic image.
13. The working method as claimed in claim 7, characterized in that step (5) includes:
(1) coordinate conversion: the tower, the UAV platform and the pod lie in three coordinate systems; the tower is jointly converted with the UAV platform, and the UAV platform is jointly converted with the gondola, so that the three are associated and converted step by step;
(2) determination of the amount of rotation: according to the inspection video displayed in real time by the virtual reality device, the transmission line components and defect positions are accurately found, the position of the component or defect is located, the focusing function of the virtual device is started, the inertial information of the virtual device is obtained in real time and mapped into the gondola coordinate system through coordinate conversion, and the mechanical control amount of the gondola is obtained;
(3) the mechanical control amount of the gondola is communication-encoded by the UAV ground-station encoder; the control amount is converted into angle and position offset information and into the horizontal, vertical and angular velocity information of the gondola, and is uploaded to the gondola over the data control link, driving the gondola and the visible-light acquisition equipment it carries to complete image acquisition.
14. The working method as claimed in claim 7, characterized in that the UAV control platform is configured to:
(1) open the video path and set the parameters and video window size, then initialize the communication parameters, and acquire and display the UAV real-time video source;
(2) by observing the dynamics of the real-time video, judge whether to accept the current position according to the detection mode and user experience;
(3) analyze the technical, external and accidental factors of defect location during UAV inspection, identify the factors at each business process node that may affect defect location accuracy, and formulate countermeasures;
(4) according to the thinking habits of the user, judge the position of the head adjustment and adjust the position of the UAV gondola according to the video angle;
(5) control the direction and function of the head according to the user's actions and, at the same time, control the gondola of the UAV in real time, realizing accurate focusing on, and intelligent acquisition of, the power line tower.
CN201710897035.3A 2017-09-28 2017-09-28 A kind of virtual interaction system and its method of work based on inertia capturing technology Pending CN107703956A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710897035.3A CN107703956A (en) 2017-09-28 2017-09-28 A kind of virtual interaction system and its method of work based on inertia capturing technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710897035.3A CN107703956A (en) 2017-09-28 2017-09-28 A kind of virtual interaction system and its method of work based on inertia capturing technology

Publications (1)

Publication Number Publication Date
CN107703956A true CN107703956A (en) 2018-02-16

Family

ID=61175056

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710897035.3A Pending CN107703956A (en) 2017-09-28 2017-09-28 A kind of virtual interaction system and its method of work based on inertia capturing technology

Country Status (1)

Country Link
CN (1) CN107703956A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109946564A (en) * 2019-03-15 2019-06-28 山东鲁能智能技术有限公司 A kind of distribution network overhead line inspection data collection method and cruising inspection system
CN111207741A (en) * 2020-01-16 2020-05-29 西安因诺航空科技有限公司 Unmanned aerial vehicle navigation positioning method based on indoor vision vicon system
CN111708916A (en) * 2020-06-21 2020-09-25 深圳天海宸光科技有限公司 Unmanned aerial vehicle cluster video intelligent processing system and method
CN115468560A (en) * 2022-11-03 2022-12-13 国网浙江省电力有限公司宁波供电公司 Quality inspection method, robot, device and medium based on multi-sensor information fusion
CN117389338A (en) * 2023-12-12 2024-01-12 天津云圣智能科技有限责任公司 Multi-view interaction method and device of unmanned aerial vehicle and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102855649A (en) * 2012-08-23 2013-01-02 山东电力集团公司电力科学研究院 Method for splicing high-definition image panorama of high-pressure rod tower on basis of ORB (Object Request Broker) feature point
CN104615147A (en) * 2015-02-13 2015-05-13 中国北方车辆研究所 Method and system for accurately positioning polling target of transformer substation
CN105222761A (en) * 2015-10-29 2016-01-06 哈尔滨工业大学 The first person immersion unmanned plane control loop realized by virtual reality and binocular vision technology and drive manner
CN204992418U (en) * 2015-09-22 2016-01-20 南方电网科学研究院有限责任公司 Automatic device of patrolling and examining of unmanned aerial vehicle transmission line defect
US20160035224A1 (en) * 2014-07-31 2016-02-04 SZ DJI Technology Co., Ltd. System and method for enabling virtual sightseeing using unmanned aerial vehicles
CN105551032A (en) * 2015-12-09 2016-05-04 国网山东省电力公司电力科学研究院 Pole image collection system and method based on visual servo
CN106092054A (en) * 2016-05-30 2016-11-09 广东能飞航空科技发展有限公司 A kind of power circuit identification precise positioning air navigation aid


Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109946564A (en) * 2019-03-15 2019-06-28 山东鲁能智能技术有限公司 A kind of distribution network overhead line inspection data collection method and cruising inspection system
CN109946564B (en) * 2019-03-15 2021-07-27 国网智能科技股份有限公司 Distribution network overhead line inspection data acquisition method and inspection system
CN111207741A (en) * 2020-01-16 2020-05-29 西安因诺航空科技有限公司 Unmanned aerial vehicle navigation positioning method based on indoor vision vicon system
CN111708916A (en) * 2020-06-21 2020-09-25 深圳天海宸光科技有限公司 Unmanned aerial vehicle cluster video intelligent processing system and method
CN115468560A (en) * 2022-11-03 2022-12-13 国网浙江省电力有限公司宁波供电公司 Quality inspection method, robot, device and medium based on multi-sensor information fusion
CN117389338A (en) * 2023-12-12 2024-01-12 天津云圣智能科技有限责任公司 Multi-view interaction method and device of unmanned aerial vehicle and storage medium
CN117389338B (en) * 2023-12-12 2024-03-08 天津云圣智能科技有限责任公司 Multi-view interaction method and device of unmanned aerial vehicle and storage medium

Similar Documents

Publication Publication Date Title
CN107703956A (en) A kind of virtual interaction system and its method of work based on inertia capturing technology
CN108139799B (en) System and method for processing image data based on a region of interest (ROI) of a user
CN106485736A (en) A kind of unmanned plane panoramic vision tracking, unmanned plane and control terminal
US20180190014A1 (en) Collaborative multi sensor system for site exploitation
CN106657923B (en) Scene switching type shared viewing system based on position
CN107071389A (en) Take photo by plane method, device and unmanned plane
KR20150021526A (en) Self learning face recognition using depth based tracking for database generation and update
CN106020234B (en) Unmanned aerial vehicle flight control method, device and equipment
CN103024350A (en) Master-slave tracking method for binocular PTZ (Pan-Tilt-Zoom) visual system and system applying same
CN108650522B (en) Live broadcast system capable of instantly obtaining high-definition photos based on automatic control
CN109164829A (en) A kind of flight mechanical arm system and control method based on device for force feedback and VR perception
CN112419233B (en) Data annotation method, device, equipment and computer readable storage medium
CN106060523B (en) Panoramic stereo image acquisition, display methods and corresponding device
CN103543827A (en) Immersive outdoor activity interactive platform implement method based on single camera
CN106657792B (en) Shared viewing device
CN103716399A (en) Remote interaction fruit picking cooperative asynchronous control system and method based on wireless network
CN108259787B (en) Panoramic video switching device and method
CN112815923A (en) Visual positioning method and device
CN111208842A (en) Virtual unmanned aerial vehicle and entity unmanned aerial vehicle mixed cluster task control system
CN113746936B (en) VR and AR distributed cooperation fully-mechanized coal mining face intelligent monitoring system
CN112669469B (en) Power plant virtual roaming system and method based on unmanned aerial vehicle and panoramic camera
CN114281100A (en) Non-hovering unmanned aerial vehicle inspection system and method thereof
CN110009595A (en) A kind of image processing method, device, picture processing chip and aircraft
CN109709975A (en) A kind of quadrotor indoor security system and method for view-based access control model SLAM
CN112558761A (en) Remote virtual reality interaction system and method for mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 250101 block B, Yinhe building, 2008 Xinjie street, hi tech Zone, Ji'nan, Shandong.

Applicant after: Shandong Luneng Intelligent Technology Co., Ltd.

Applicant after: Electric Power Research Institute of State Grid Shandong Electric Power Company

Applicant after: State Grid Corporation of China

Address before: 250101 B block 626, Yinhe building, 2008 Xinjie street, Ji'nan high tech Zone, Shandong.

Applicant before: Shandong Luneng Intelligent Technology Co., Ltd.

Applicant before: Electric Power Research Institute of State Grid Shandong Electric Power Company

Applicant before: State Grid Corporation of China

CB02 Change of applicant information

Address after: 250101 Electric Power Intelligent Robot Production Project 101 in Jinan City, Shandong Province, South of Feiyue Avenue and East of No. 26 Road (ICT Industrial Park)

Applicant after: National Network Intelligent Technology Co., Ltd.

Applicant after: Electric Power Research Institute of State Grid Shandong Electric Power Company

Applicant after: State Grid Co., Ltd.

Address before: 250101 block B, Yinhe building, 2008 Xinjie street, hi tech Zone, Ji'nan, Shandong.

Applicant before: Shandong Luneng Intelligent Technology Co., Ltd.

Applicant before: Electric Power Research Institute of State Grid Shandong Electric Power Company

Applicant before: State Grid Corporation

TA01 Transfer of patent application right

Effective date of registration: 20210225

Address after: Room 902, 9 / F, block B, Yinhe building, 2008 Xinluo street, high tech Zone, Jinan City, Shandong Province, 250101

Applicant after: Shandong Luneng Software Technology Co.,Ltd. intelligent electrical branch

Applicant after: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.

Applicant after: STATE GRID CORPORATION OF CHINA

Address before: 250101 power intelligent robot production project 101 south of Feiyue Avenue and east of No.26 Road (in ICT Industrial Park) in Suncun District of Gaoxin, Jinan City, Shandong Province

Applicant before: National Network Intelligent Technology Co.,Ltd.

Applicant before: ELECTRIC POWER RESEARCH INSTITUTE OF STATE GRID SHANDONG ELECTRIC POWER Co.

Applicant before: STATE GRID CORPORATION OF CHINA

RJ01 Rejection of invention patent application after publication

Application publication date: 20180216