CN116866719A - Intelligent analysis processing method for high-definition video content based on image recognition - Google Patents

Intelligent analysis processing method for high-definition video content based on image recognition

Info

Publication number
CN116866719A
CN116866719A (application CN202310853491.3A)
Authority
CN
China
Prior art keywords
target
tracked
contour
aerial vehicle
unmanned aerial
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310853491.3A
Other languages
Chinese (zh)
Other versions
CN116866719B
Inventor
胡明征
徐象锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Chengshi Electronic Technology Co ltd
Original Assignee
Shandong Henghui Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Henghui Software Co ltd filed Critical Shandong Henghui Software Co ltd
Priority to CN202310853491.3A
Publication of CN116866719A
Application granted
Publication of CN116866719B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/695Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Abstract

The invention relates to the technical field of intelligent analysis of high-definition video content, and particularly discloses an intelligent analysis processing method for high-definition video content based on image recognition, comprising the steps of inputting the target to be tracked, acquiring the tracking-target video, confirming the target required to be tracked, acquiring the shooting position, and adaptively adjusting the shooting attitude. Based on the contour of the target required to be tracked, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is analyzed in the two states of complete coincidence and partial coincidence, realizing a data-driven analysis of the shooting setting adaptation degree and improving the accuracy and suitability of that analysis. Meanwhile, the shooting attitude for the target required to be tracked is adaptively adjusted according to the azimuth of the target contour within the photographed image, the offset distance and azimuth of its center point, the aerial speed of the aerial unmanned aerial vehicle and the GPS position coordinates, realizing flexible adjustment of the shooting attitude.

Description

Intelligent analysis processing method for high-definition video content based on image recognition
Technical Field
The invention relates to the technical field of intelligent analysis of high-definition video content, in particular to an intelligent analysis processing method of high-definition video content based on image recognition.
Background
Intelligent analysis of high-definition video content analyzes and understands each frame of a video through computer vision and artificial intelligence techniques, covering tasks such as target detection and motion tracking. When motion tracking is performed, unmanned aerial vehicle aerial photography is commonly adopted; to ensure the high definition and integrity of the resulting video, the aerial photographing settings of the unmanned aerial vehicle must be handled properly.
The existing aerial unmanned aerial vehicle mainly adjusts its shooting attitude by adjusting the aerial angle. This adjustment mode has the following problems: 1. Adjusting only the aerial angle considers merely the visual effect of the aerial video, so the integrity of the aerial video is not guaranteed.
2. No in-depth analysis is performed on the position ratio and size of the tracked object within the photographed image, which reduces the reliability, rationality and flexibility of the shooting attitude adjustment of the aerial unmanned aerial vehicle.
3. The aerial flight speed is not analyzed in combination with the shooting conditions of dynamically tracked targets, so the high-definition effect of aerial video of dynamic objects is poor, and the reliability of the shooting attitude adjustment for dynamic objects is reduced.
Disclosure of Invention
In view of this, in order to solve the problems set forth in the background art, an intelligent analysis processing method for high-definition video content based on image recognition is now provided.
The aim of the invention can be achieved by the following technical scheme: the invention provides an intelligent analysis processing method for high-definition video content based on image recognition, which comprises the following steps: s1, inputting a target to be tracked: dividing a contour image of a target to be tracked into contour areas, extracting RGB values of the contour areas, and further inputting the type of the target to be tracked, the contour image and the RGB values of the contour areas into a management background of the target aerial unmanned aerial vehicle, wherein the type comprises moving and non-moving.
S2, tracking target video acquisition: and starting the target aerial unmanned aerial vehicle according to the input type of the target to be tracked, the contour image and the RGB value of each contour area, carrying out video acquisition on each similar target to be tracked, and positioning video information from each similar target to be tracked.
S3, confirmation of the target required to be tracked: the tracking similarity of each similar target to be tracked is calculated according to its video information, so as to lock the target that the target aerial unmanned aerial vehicle is to track, which is recorded as the target required to be tracked.
S4, acquiring shooting positions: the method comprises the steps of obtaining a video of a demand tracking target collected by a target aerial unmanned aerial vehicle, locating the outline of the demand tracking target from the video, and extracting aerial setting parameters of the target aerial unmanned aerial vehicle.
S5, shooting attitude adaptability adjustment: according to the contour of the target required to be tracked by the target aerial unmanned aerial vehicle, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is calculated, and the shooting attitude of the aerial unmanned aerial vehicle is adaptively adjusted accordingly.
Specifically, the video information is RGB values of each contour region of each divided image.
Specifically, the tracking similarity of each similar target to be tracked is calculated as follows: A1, locating the contour volume of the target to be tracked from the contour image of the target to be tracked, marked as V_tgt.
A2, overlapping and comparing the contour of each similar target to be tracked with the contour of the target to be tracked to obtain the overlap volume of each pair, extracting the maximum overlap volume and marking it as V_i^max, where i represents the similar-target number, i = 1, 2, ..., n.
A3, calculating the tracking similarity β_i of the contour layer corresponding to each similar target to be tracked, where K represents the contour overlap volume ratio of the set reference and e represents the natural constant.
A4, extracting RGB values of each contour area in each divided image from video information of each similar target to be tracked.
A5, calculating the tracking similarity χ_i of the color layer corresponding to each similar target to be tracked.
A6, calculating the tracking similarity δ_i of each similar target to be tracked.
Specifically, the tracking similarity of the color layer corresponding to each similar target to be tracked is calculated as follows: B1, marking the RGB values of each contour region of the target segmentation image corresponding to each similar target to be tracked as R_ij, G_ij and B_ij respectively, where j represents the contour region number, j = 1, 2, ..., m.
B2, respectively marking the RGB values of each contour region of the entered target to be tracked as R_j', G_j' and B_j'.
B3, calculating the tracking similarity χ_i of the color layer corresponding to each similar target to be tracked.
Specifically, in the calculation formula for the tracking similarity δ_i of each similar target to be tracked, a_1 and a_2 respectively represent the set similarity evaluation weights of the contour layer and the color layer, and γ_1 represents the set tracking-similarity evaluation correction factor.
Specifically, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is calculated as follows: C1, comparing the contour of the target required to be tracked with the entered contour of the target to be tracked; if they completely coincide, extracting the image area ratio of the contour of the target required to be tracked and the offset distance of its center point, marked as K_1 and x_1 respectively.
C2, calculating the shooting setting adaptation degree θ_1 of the target aerial unmanned aerial vehicle under complete coincidence, where K_ref and x_ref respectively represent the area ratio and center-point offset of the set reference, ΔK and Δx represent the deviations from the set reference area ratio and center-point offset, b_1 and b_2 respectively represent the set evaluation weights of the area-ratio deviation and the center-point offset deviation under complete coincidence, and γ_2 represents the set shooting-setting adaptation evaluation correction factor under complete coincidence.
C3, comparing the contour of the target required to be tracked with the entered contour of the target to be tracked; if they partially coincide, extracting the image area ratio of the contour of the target required to be tracked, the area of the non-overlapping contour and the offset distance of the contour's center point, marked as K_2, S and x_2 respectively.
C4, calculating the shooting setting adaptation degree θ_2 of the target aerial unmanned aerial vehicle under partial coincidence, where S_ref represents the non-overlapping contour area of the set reference, b_3, b_4 and b_5 respectively represent the set evaluation weights of the area-ratio deviation, the center-point offset deviation and the non-overlapping contour area under partial coincidence, and γ_3 represents the set shooting-setting adaptation evaluation correction factor under partial coincidence.
Specifically, the aerial photographing setting parameters include aerial photographing speed and GPS position coordinates.
Specifically, the shooting attitude adaptive adjustment of the aerial unmanned aerial vehicle includes the adjustment when the type of the target required to be tracked is moving and the adjustment when the type is non-moving. The adjustment process when the type is moving is as follows: D1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target required to be tracked completely coincides with the entered contour of the target to be tracked, extracting the area ratio of the contour image of the target required to be tracked and the offset distance and relative azimuth of the contour's center point, and extracting the current GPS position coordinates from the aerial photographing setting parameters.
D2, when the area ratio of the contour image of the target required to be tracked is smaller than the set value, sending a descent instruction to the target aerial unmanned aerial vehicle and taking H = ΔK × H_ref as the descent value, where H_ref represents the reference movement height corresponding to the set unit area-ratio deviation.
D3, when the area ratio of the contour image of the target required to be tracked is larger than the set value, sending an ascent instruction to the target aerial unmanned aerial vehicle and taking H as the ascent value.
D4, when the center point of the contour of the target required to be tracked lies to the left of the center point of the photographed image, sending a left-shift instruction to the target aerial unmanned aerial vehicle and taking x_1 as the left-shift distance; when it lies to the right, sending a right-shift instruction and taking x_1 as the right-shift distance.
D5, when the contour of the target required to be tracked and the entered contour of the target to be tracked partially coincide, extracting the relative azimuth of the current contour image of the target required to be tracked and the offset distance and relative azimuth of the contour's center point, and extracting the current GPS position coordinates and the aerial photographing speed from the aerial photographing setting parameters.
D6, when the contour image of the target required to be tracked lies above the center of the photographed image, extracting the offset distance of the contour's center point, denoted x_up, sending an acceleration instruction to the target aerial unmanned aerial vehicle and taking v = x_up × v_ref as the acceleration value, where v_ref represents the reference shooting-speed adjustment corresponding to the set unit distance deviation.
D7, when the contour image of the target required to be tracked lies below the center of the photographed image, extracting the offset distance of the contour's center point, denoted x_down, sending a deceleration instruction to the target aerial unmanned aerial vehicle and taking v = x_down × v_ref as the deceleration value.
D8, when the center point of the contour of the target required to be tracked lies to the left or right of the center point of the photographed image, the adjustment is obtained in the same way as the left/right shooting attitude adaptive adjustment under complete coincidence.
D9, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is larger than the set reference value, maintaining the current GPS position coordinates and aerial photographing speed.
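Steps D2-D4 amount to a small rule-based command generator: altitude commands are derived from the contour-image area ratio, and lateral-shift commands from the contour center point. The sketch below is illustrative only; the names are assumptions, and the value H = ΔK × H_ref is reconstructed by analogy with the symmetric speed rule in D6/D7, not taken from the patent's (image-only) formula.

```python
def moving_adjustment(area_ratio, area_ref, dk, h_ref,
                      cx_target, cx_image, x1):
    """Sketch of steps D2-D4 for a moving target.

    area_ratio / area_ref : measured and reference contour-image area ratios
    dk, h_ref             : area-ratio deviation and reference height per unit deviation
    cx_target, cx_image   : horizontal center of the target contour and of the image
    x1                    : center-point offset distance used as the shift value
    Returns a list of (command, value) pairs.
    """
    commands = []
    h = dk * h_ref  # reconstructed climb/descent value H = dK * H_ref
    if area_ratio < area_ref:        # target appears too small -> descend (D2)
        commands.append(("descend", h))
    elif area_ratio > area_ref:      # target appears too large -> ascend (D3)
        commands.append(("ascend", h))
    if cx_target < cx_image:         # contour center left of image center (D4)
        commands.append(("shift_left", x1))
    elif cx_target > cx_image:       # contour center right of image center (D4)
        commands.append(("shift_right", x1))
    return commands
```

In practice these commands would be forwarded to the flight controller; here they are returned as data so the decision logic can be inspected in isolation.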
Specifically, the adjustment process when the type of the target required to be tracked is non-moving is as follows: E1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target required to be tracked completely coincides with the entered contour of the target to be tracked, the adjustment is obtained in the same way as the complete-coincidence shooting attitude adaptive adjustment for the moving type.
E2, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target required to be tracked partially coincides with the entered contour of the target to be tracked, extracting the relative azimuth of the current contour image of the target required to be tracked and the offset distance and relative azimuth of the contour's center point, and extracting the current GPS position coordinates from the aerial photographing setting parameters.
E3, when the contour image of the target required to be tracked lies above the center of the photographed image, extracting the offset distance of the contour's center point, denoted x, sending a forward-move instruction to the target aerial unmanned aerial vehicle and taking x as the forward-move distance.
E4, when the contour image of the target required to be tracked lies below the center of the photographed image, extracting the offset distance of the contour's center point, denoted x, sending a backward-move instruction to the target aerial unmanned aerial vehicle and taking x as the backward-move distance.
E5, when the center point of the contour of the target required to be tracked lies to the left or right of the center point of the photographed image, the adjustment is obtained in the same way as the left/right shooting attitude adaptive adjustment under complete coincidence.
Compared with the prior art, the embodiments of the invention have at least the following advantages or beneficial effects: (1) The tracking similarity of each similar target to be tracked is analyzed in depth through both the contour layer and the color layer, realizing a multidimensional similarity analysis, reducing errors in the confirmation result and improving the accuracy of tracking-target confirmation.
(2) Through the contour of the target required to be tracked, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is analyzed in the two states of complete coincidence and partial coincidence, realizing a data-driven analysis that intuitively displays the adaptation state, improving the accuracy and suitability of the analysis, and providing a reliable data basis for the subsequent shooting attitude adaptive adjustment of the aerial unmanned aerial vehicle.
(3) The shooting attitude of both moving and non-moving targets required to be tracked is adaptively adjusted by combining the target contour with its different positions within the photographed image, which improves the integrity and high definition of the photographed images as well as the coverage and reliability of the shooting attitude adjustment of the aerial unmanned aerial vehicle.
(4) The shooting attitude of a moving target required to be tracked is adaptively adjusted according to the azimuth of the target contour within the photographed image, the offset distance and azimuth of its center point, the aerial speed of the aerial unmanned aerial vehicle and the GPS position coordinates, realizing flexible adjustment of the shooting attitude for moving targets.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are needed for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the steps of the method of the present invention.
FIG. 2 is a schematic diagram of the area ratio of the present invention.
FIG. 3 is a schematic view of the left and right directions of the center point of the present invention.
FIG. 4 is a schematic view of the center point of the present invention in a vertical direction.
Reference numerals: 1, photographed image; 2, contour image of the target required to be tracked; 3, center point of the photographed image; 4, center point of the contour of the target required to be tracked; 5, offset distance of the center point of the contour of the target required to be tracked.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the invention provides a high-definition video content intelligent analysis processing method based on image recognition, which comprises the following steps: s1, inputting a target to be tracked: dividing a contour image of a target to be tracked into contour areas, extracting RGB values of the contour areas, and further inputting the type of the target to be tracked, the contour image and the RGB values of the contour areas into a management background of the target aerial unmanned aerial vehicle, wherein the type comprises moving and non-moving.
S2, tracking target video acquisition: and starting the target aerial unmanned aerial vehicle according to the input type of the target to be tracked, the contour image and the RGB value of each contour area, carrying out video acquisition on each similar target to be tracked, and positioning video information from each similar target to be tracked.
It should be noted that each similar target to be tracked is confirmed by extracting objects whose contours resemble that of the target to be tracked from a cloud database, taking them as the similar targets to be tracked and entering them into the management background of the target aerial unmanned aerial vehicle.
In a specific embodiment of the present invention, the video information is RGB values of each contour area of each divided image.
It should be noted that each divided image is obtained by traversing each frame of the video and storing it as an independent image file, and each contour area is obtained in the same way as the contour-area division of the target to be tracked.
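The per-frame RGB extraction described above can be sketched minimally with numpy, assuming frames are already decoded to H×W×3 RGB arrays (in practice a decoder such as OpenCV's VideoCapture would supply them) and approximating the contour regions by a uniform grid; both assumptions are illustrative simplifications.

```python
import numpy as np

def region_rgb_values(frame: np.ndarray, grid=(2, 2)):
    """Split an H x W x 3 RGB frame into grid regions standing in for the
    contour regions, and return the mean RGB value of each region (the
    'video information' of step S2)."""
    h, w, _ = frame.shape
    rows, cols = grid
    values = []
    for r in range(rows):
        for c in range(cols):
            region = frame[r * h // rows:(r + 1) * h // rows,
                           c * w // cols:(c + 1) * w // cols]
            # mean over all pixels in the region, per channel
            values.append(tuple(region.reshape(-1, 3).mean(axis=0)))
    return values
```

Applied to every stored frame, this yields the per-region RGB values that the later similarity steps consume.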
S3, confirmation of the target required to be tracked: the tracking similarity of each similar target to be tracked is calculated according to its video information, so as to lock the target that the target aerial unmanned aerial vehicle is to track, which is recorded as the target required to be tracked.
In a specific embodiment of the present invention, the tracking similarity of each similar target to be tracked is calculated as follows: A1, locating the contour volume of the target to be tracked from the contour image of the target to be tracked, marked as V_tgt.
A2, overlapping and comparing the contour of each similar target to be tracked with the contour of the target to be tracked to obtain the overlap volume of each pair, extracting the maximum overlap volume and marking it as V_i^max, where i represents the similar-target number, i = 1, 2, ..., n.
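The overlap comparison of A2 can be sketched in 2D with boolean pixel masks, approximating each contour's "volume" by its mask area and the overlap volume by the intersection area; the mask representation is an assumption made for illustration, not the patent's stated data structure.

```python
import numpy as np

def max_overlap_ratio(target_mask: np.ndarray, candidate_masks):
    """Overlay each candidate contour mask on the target contour mask and
    return the largest overlap ratio (intersection area / target area)
    together with the index of the best-overlapping candidate."""
    target_area = target_mask.sum()
    ratios = [np.logical_and(target_mask, m).sum() / target_area
              for m in candidate_masks]
    best = int(np.argmax(ratios))
    return ratios[best], best
```

The returned ratio plays the role of V_i^max / V_tgt in the similarity step that follows.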
A3, calculating the tracking similarity β_i of the contour layer corresponding to each similar target to be tracked, where K represents the contour overlap volume ratio of the set reference and e represents the natural constant.
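The published β_i formula appears only as an image; the sketch below is a hypothetical stand-in consistent with the variables the text names (an overlap ratio compared against a reference K, with the natural constant e), not the patent's actual equation.

```python
import math

def contour_similarity(v_max: float, v_tgt: float, k_ref: float = 0.8) -> float:
    """Hypothetical contour-layer similarity: equals 1 when the overlap
    ratio v_max / v_tgt matches the reference K, and decays exponentially
    as the ratio departs from it. k_ref is a placeholder value."""
    ratio = v_max / v_tgt
    return math.exp(-abs(ratio - k_ref))
```

Any monotone map of |ratio − K| into (0, 1] would serve the same role; the exponential form is chosen only because the text mentions e.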
A4, extracting RGB values of each contour area in each divided image from video information of each similar target to be tracked.
A5, calculating the tracking similarity χ_i of the color layer corresponding to each similar target to be tracked.
In a specific embodiment of the present invention, the specific calculation process is as follows: B1, marking the RGB values of each contour region of the target segmentation image corresponding to each similar target to be tracked as R_ij, G_ij and B_ij respectively, where j represents the contour region number, j = 1, 2, ..., m.
It should be noted that, the target segmented image is a segmented image corresponding to the maximum overlapping volume of each similar target to be tracked.
B2, respectively marking the RGB values of each contour region of the entered target to be tracked as R_j', G_j' and B_j'.
B3, calculating the tracking similarity χ_i of the color layer corresponding to each similar target to be tracked.
A6, calculating the tracking similarity δ_i of each similar target to be tracked.
In a specific embodiment of the present invention, in the calculation formula for the tracking similarity of each similar target to be tracked, a_1 and a_2 respectively represent the set similarity evaluation weights of the contour layer and the color layer, and γ_1 represents the set tracking-similarity evaluation correction factor.
It should be noted that the target is locked by extracting the maximum tracking similarity from the tracking similarities of the similar targets to be tracked, and taking the corresponding similar target as the target required to be tracked by the target aerial unmanned aerial vehicle.
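The locking rule of S3, combining the two layers and taking the maximum, can be sketched as below. The linear combination γ_1(a_1·β_i + a_2·χ_i) is an assumed form (the published formula is an image), and the default weights are placeholders.

```python
def lock_target(contour_sims, color_sims, a1=0.6, a2=0.4, gamma1=1.0):
    """Combine contour- and color-layer similarities per candidate with the
    weights a1, a2 and correction factor gamma1, then return the index and
    score of the candidate with the largest overall similarity."""
    overall = [gamma1 * (a1 * b + a2 * c)
               for b, c in zip(contour_sims, color_sims)]
    best = max(range(len(overall)), key=overall.__getitem__)
    return best, overall[best]
```

The winning index identifies which similar target becomes the "target required to be tracked".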
According to the embodiment of the invention, the tracking similarity of each similar target to be tracked is analyzed in depth through both the contour layer and the color layer, realizing a multidimensional similarity analysis, reducing errors in the confirmation result and improving the accuracy of tracking-target confirmation.
S4, acquiring shooting positions: acquiring a video of a target to be tracked, which is acquired by a target aerial unmanned aerial vehicle, locating the outline of the target to be tracked from the video, and extracting aerial setting parameters of the target aerial unmanned aerial vehicle;
in a specific embodiment of the present invention, the aerial photographing setting parameters include an aerial photographing speed and a GPS position coordinate.
S5, shooting attitude adaptability adjustment: according to the contour of the target required to be tracked by the target aerial unmanned aerial vehicle, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is calculated, and the shooting attitude of the aerial unmanned aerial vehicle is adaptively adjusted accordingly.
In a specific embodiment of the present invention, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is calculated as follows: C1, comparing the contour of the target required to be tracked with the entered contour of the target to be tracked; if they completely coincide, extracting the image area ratio of the contour of the target required to be tracked and the offset distance of its center point, marked as K_1 and x_1 respectively.
It should be noted that the offset refers to a distance between a position of a center point of the profile of the target to be tracked and a position of a center point of the captured image.
C2, calculating the shooting setting adaptation degree θ_1 of the target aerial unmanned aerial vehicle under complete coincidence, where K_ref and x_ref respectively represent the area ratio and center-point offset of the set reference, ΔK and Δx represent the deviations from the set reference area ratio and center-point offset, b_1 and b_2 respectively represent the set evaluation weights of the area-ratio deviation and the center-point offset deviation under complete coincidence, and γ_2 represents the set shooting-setting adaptation evaluation correction factor under complete coincidence.
C3, comparing the contour of the target to be tracked with the input contour of the target to be tracked; if the contours partially coincide, extracting the image area occupation ratio of the contour of the target to be tracked, the area of the non-coincident contour, and the offset distance of the center point of the contour of the target to be tracked, which are respectively denoted K2, S and x2.
C4, calculating the shooting setting adaptation degree θ2 of the target aerial unmanned aerial vehicle under partial coincidence, wherein S_ref denotes the reference non-coincident contour area, b3, b4 and b5 respectively denote the set adaptation-evaluation weights corresponding to the area-occupation-ratio deviation, the center-point-offset deviation and the non-coincident contour area under partial coincidence, and γ3 denotes the set shooting-setting adaptation-evaluation correction factor under partial coincidence.
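The exact formulas for θ1 and θ2 appear in the patent drawings and are not reproduced in the text. As an illustrative reconstruction consistent with the symbol definitions above (deviations normalized by ΔK, Δx and S_ref, weighted by b1…b5, scaled by the correction factors), one plausible form that is largest when the observed values match the references is:

```python
def adaptation_complete(K1, x1, K_ref, x_ref, dK, dx, b1, b2, gamma2):
    """Assumed form of theta_1 (complete coincidence): deviations from
    the reference area ratio and center offset, normalized by the set
    unit deviations dK and dx, lower the adaptation degree."""
    penalty = b1 * abs(K1 - K_ref) / dK + b2 * abs(x1 - x_ref) / dx
    return gamma2 / (1.0 + penalty)

def adaptation_partial(K2, x2, S, K_ref, x_ref, S_ref,
                       dK, dx, b3, b4, b5, gamma3):
    """Assumed form of theta_2 (partial coincidence): the non-coincident
    contour area S adds a third penalty term relative to S_ref."""
    penalty = (b3 * abs(K2 - K_ref) / dK
               + b4 * abs(x2 - x_ref) / dx
               + b5 * S / S_ref)
    return gamma3 / (1.0 + penalty)
```

Under this assumed form a perfect match (K1 = K_ref, x1 = x_ref) yields θ1 = γ2, and any deviation decreases θ1, which is consistent with the later rule that an adaptation degree above the reference value means the current position and speed are simply maintained.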
According to the embodiment of the invention, based on the contour of the target to be tracked, the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is analyzed in the two states of complete coincidence and partial coincidence. This realizes data analysis of the shooting setting adaptation degree, intuitively displays the shooting setting adaptation state of the target aerial unmanned aerial vehicle, improves the accuracy of the adaptation analysis, and provides a reliable data basis for the subsequent adaptive adjustment of the shooting posture of the aerial unmanned aerial vehicle.
It should be noted that in a specific embodiment, the moving target may be a traveling vehicle or a moving animal; when the aerial unmanned aerial vehicle is not stationary relative to the moving target, the captured image may be incomplete, so the shooting posture of the aerial unmanned aerial vehicle needs to be adaptively adjusted.
In a specific embodiment of the present invention, the adaptive adjustment of the shooting posture of the aerial unmanned aerial vehicle includes adaptive adjustment when the type of the target to be tracked is moving and adaptive adjustment when the type is non-moving. The specific adjustment process when the type of the target to be tracked is moving is as follows, referring to fig. 2 to 4: D1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked completely coincides with the input contour of the target to be tracked, the area occupation ratio of the contour image 2 of the target to be tracked and the offset distance 5 and relative azimuth of the contour center point are extracted, and the current GPS position coordinates are extracted from the aerial photographing setting parameters.
D2, when the area occupation ratio of the contour image 2 of the target to be tracked is smaller than a set value, a descent instruction is sent to the target aerial unmanned aerial vehicle, with the computed height adjustment H taken as the descent value, wherein H_ref denotes the reference movement height value corresponding to the set unit area-occupation-ratio deviation.
D3, when the area occupation ratio of the contour image 2 of the target to be tracked is larger than the set value, an ascent instruction is sent to the target aerial unmanned aerial vehicle, with H taken as the ascent value.
The area ratio of the profile image of the target to be tracked determines the integrity and definition of the captured image, and thus the area ratio of the profile image of the target to be tracked needs to be analyzed.
D4, when the center point position 4 of the contour of the target to be tracked lies to the left of the center point position 3 of the captured image, a left-shift instruction is sent to the target aerial unmanned aerial vehicle, with x1 as the left-shift distance value; when the center point position 4 of the contour of the target to be tracked lies to the right of the center point position 3 of the captured image, a right-shift instruction is sent to the target aerial unmanned aerial vehicle, with x1 as the right-shift distance value.
D5, when the contour of the target to be tracked and the input contour of the target to be tracked partially coincide, the relative azimuth of the current contour image 2 of the target to be tracked and the offset distance 5 and relative azimuth of the contour center point are extracted, and the current GPS position coordinates and the aerial photographing speed are extracted from the aerial photographing setting parameters.
D6, when the contour image 2 of the target to be tracked is above the captured image 1, the offset distance 5 of the contour center point is extracted and denoted x_up; an acceleration instruction is sent to the target aerial unmanned aerial vehicle, with v = x_up * v_ref as the acceleration value, wherein v_ref denotes the reference movement shooting speed corresponding to the set unit distance deviation.
D7, when the contour image 2 of the target to be tracked is below the captured image 1, the offset distance 5 of the contour center point is extracted and denoted x_down; a deceleration instruction is sent to the target aerial unmanned aerial vehicle, with v = x_down * v_ref as the deceleration value.
It should be noted that, during flight of the unmanned aerial vehicle, the nose direction is regarded as up, the tail direction as down, the left-wing direction as left, and the right-wing direction as right.
D8, when the center point position 4 of the contour of the target to be tracked lies to the left or right of the center point position 3 of the captured image, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment for the left and right cases under complete coincidence.
D9, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is larger than the set reference value, the current GPS position coordinates and aerial photographing speed are maintained.
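Steps D1–D9 amount to a rule-based command dispatcher. A minimal sketch follows; the command names, the side/azimuth encoding and the function signature are assumptions — the patent specifies only the decision rules and the magnitudes (H, x1, x_up·v_ref, x_down·v_ref).

```python
def posture_commands_moving(theta, theta_ref, fully_coincident,
                            area_ratio, area_set, x1, horiz_side,
                            H, x_offset, vert_side, v_ref):
    """Return (command, value) pairs per steps D1-D9 for a moving target.

    horiz_side: 'left'/'right'/None - side of the image center point on
    which the contour center point lies; vert_side: 'above'/'below'/None.
    """
    if theta >= theta_ref:                      # D9: adaptation adequate,
        return [("hold", 0.0)]                  # keep position and speed
    cmds = []
    if fully_coincident:                        # D1-D3: adjust altitude
        if area_ratio < area_set:
            cmds.append(("descend", H))         # D2: fly lower
        elif area_ratio > area_set:
            cmds.append(("ascend", H))          # D3: fly higher
    else:                                       # D5-D7: adjust speed
        if vert_side == "above":
            cmds.append(("accelerate", x_offset * v_ref))   # D6
        elif vert_side == "below":
            cmds.append(("decelerate", x_offset * v_ref))   # D7
    if horiz_side == "left":                    # D4 / D8: lateral shift
        cmds.append(("shift_left", x1))
    elif horiz_side == "right":
        cmds.append(("shift_right", x1))
    return cmds
```

For example, a too-small, left-offset contour under complete coincidence yields a descent command followed by a left shift.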
According to the embodiment of the invention, the adaptive adjustment of the shooting posture for a moving target to be tracked is analyzed through the azimuth of the target contour within the captured image, the offset distance and azimuth of its center point, and the aerial photographing speed and GPS position coordinates of the aerial unmanned aerial vehicle, realizing flexible adjustment of the shooting posture for a moving target to be tracked.
In a specific embodiment of the present invention, the specific adjustment process for adaptively adjusting the shooting posture when the type of the target to be tracked is non-moving is as follows: E1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked completely coincides with the input contour, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment under complete coincidence when the type of the target to be tracked is moving.
E2, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked partially coincides with the input contour, the relative azimuth of the current contour image 2 of the target to be tracked and the offset distance 5 and relative azimuth of the contour center point are extracted, and the current GPS position coordinates are extracted from the aerial photographing setting parameters.
E3, when the contour image 2 of the target to be tracked is above the captured image 1, the offset distance 5 of the contour center point is extracted and denoted x_front; a forward-movement instruction is sent to the target aerial unmanned aerial vehicle, with x_front as the forward-movement distance value.
E4, when the contour image 2 of the target to be tracked is below the captured image 1, the offset distance 5 of the contour center point is extracted and denoted x_back; a backward-movement instruction is sent to the target aerial unmanned aerial vehicle, with x_back as the backward-movement distance value.
E5, when the center point position 4 of the contour of the target to be tracked lies to the left or right of the center point position 3 of the captured image, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment for the left and right cases under complete coincidence.
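For a non-moving target, the partial-coincidence branch translates vertical offset into forward/backward movement rather than a speed change (steps E2–E4). A sketch under the same assumed encoding as the moving case:

```python
def posture_commands_static(vert_side, horiz_side, offset, x1):
    """Return (command, value) pairs for steps E2-E5: a non-moving
    target under partial coincidence. 'offset' is the contour
    center-point offset distance (x_front or x_back by side)."""
    cmds = []
    if vert_side == "above":
        cmds.append(("move_forward", offset))   # E3: advance by x_front
    elif vert_side == "below":
        cmds.append(("move_backward", offset))  # E4: retreat by x_back
    if horiz_side == "left":                    # E5: same lateral rule
        cmds.append(("shift_left", x1))         # as under coincidence
    elif horiz_side == "right":
        cmds.append(("shift_right", x1))
    return cmds
```

The design difference from the moving case is deliberate: a stationary target needs a position correction, not a sustained speed change, so the vertical offset maps to a one-off displacement.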
According to the embodiment of the invention, the shooting posture is adaptively adjusted for both moving and non-moving targets to be tracked by combining the target contour with its different positions within the captured image, which improves the integrity and definition of the captured images and improves the coverage and reliability of the shooting posture adaptive adjustment of the aerial unmanned aerial vehicle.
The foregoing is merely illustrative and explanatory of the principles of this invention, as various modifications and additions may be made to the specific embodiments described, or similar arrangements may be substituted by those skilled in the art, without departing from the principles of this invention or beyond the scope of this invention as defined in the claims.

Claims (9)

1. The intelligent analysis processing method for the high-definition video content based on the image recognition is characterized by comprising the following steps of:
s1, inputting a target to be tracked: dividing a contour image of a target to be tracked into contour areas, extracting RGB values of the contour areas, and inputting the type of the target to be tracked, the contour image and the RGB values of the contour areas into a management background of the target aerial unmanned aerial vehicle, wherein the type comprises moving and non-moving;
s2, tracking target video acquisition: starting a target aerial unmanned aerial vehicle according to the input type of the target to be tracked, the contour image and the RGB value of each contour area, carrying out video acquisition on each similar target to be tracked, and positioning video information from each similar target to be tracked;
s3, demand tracking target confirmation: according to the video information of each similar target to be tracked, calculating the tracking similarity of each similar target to be tracked, thereby locking the target that the target aerial unmanned aerial vehicle needs to track, which is marked as the target to be tracked;
s4, acquiring shooting positions: acquiring a video of a target to be tracked, which is acquired by a target aerial unmanned aerial vehicle, locating the outline of the target to be tracked from the video, and extracting aerial setting parameters of the target aerial unmanned aerial vehicle;
s5, shooting posture adaptive adjustment: according to the contour of the target to be tracked by the target aerial unmanned aerial vehicle, calculating the shooting setting adaptation degree of the target aerial unmanned aerial vehicle, and adaptively adjusting the shooting posture of the aerial unmanned aerial vehicle.
2. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 1, wherein the intelligent analysis processing method is characterized by comprising the following steps: the video information is RGB values of each contour region in each divided image.
3. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 2, wherein the intelligent analysis processing method is characterized by comprising the following steps: the tracking similarity of each similar target to be tracked is calculated, and the specific calculation process is as follows:
a1, locating the contour volume of the target to be tracked from the contour image of the target to be tracked, and marking as V To be treated
A2, comparing, by coincidence, the contour of each similar target to be tracked with the contour of the target to be tracked to obtain their overlapping volumes, and extracting therefrom the maximum overlapping volume, denoted Vi, where i denotes the number of the similar target to be tracked, i = 1, 2, …, n;
a3, calculating the tracking similarity beta of the profile layer corresponding to each similar target to be tracked iWherein K represents the contour overlapping volume ratio of the set reference, and e represents a natural constant;
a4, extracting RGB values of each contour area in each divided image from video information of each similar target to be tracked;
a5, calculating the tracking similarity χi of the corresponding color layers of the similar targets to be tracked;
a6, calculating the tracking similarity delta of each similar target to be tracked i
4. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 3, wherein the intelligent analysis processing method is characterized by comprising the following steps of: the tracking similarity of the corresponding color layers of the similar targets to be tracked is calculated, and the specific calculation process is as follows:
b1, marking RGB values of each contour region corresponding to the target to be tracked in the target segmentation image corresponding to each similar target to be tracked as R respectively ij 、G ij And B ij Where j represents the contour region number, j=1, 2, m;
b2, respectively marking RGB values of the target to be tracked corresponding to each contour area asAnd->
B3, calculating the tracking similarity χi of the color layer corresponding to each similar target to be tracked.
5. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 3, wherein: the tracking similarity of each similar target to be tracked is calculated from βi and χi, wherein a1 and a2 respectively denote the set similarity-evaluation weights of the contour layer and the color layer, and γ1 denotes the set tracking-similarity evaluation correction factor.
6. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 3, wherein the intelligent analysis processing method is characterized by comprising the following steps of: the shooting setting adaptation degree of the calculation target aerial unmanned aerial vehicle comprises the following specific calculation processes:
c1, comparing the contour of the target to be tracked with the input contour of the target to be tracked; if the contours completely coincide, extracting the image area occupation ratio of the contour of the target to be tracked and the offset distance of the center point of the contour of the target to be tracked, which are respectively denoted K1 and x1;
C2, calculating the shooting setting adaptation degree θ1 of the target aerial unmanned aerial vehicle under complete coincidence, wherein K_ref and x_ref respectively denote the reference area occupation ratio and the reference center-point offset, ΔK and Δx denote the set area-occupation-ratio deviation and center-point-offset deviation, b1 and b2 respectively denote the set adaptation-evaluation weights corresponding to the area-occupation-ratio deviation and the center-point-offset deviation under complete coincidence, and γ2 denotes the set shooting-setting adaptation-evaluation correction factor under complete coincidence;
c3, comparing the contour of the target to be tracked with the input contour of the target to be tracked; if the contours partially coincide, extracting the image area occupation ratio of the contour of the target to be tracked, the area of the non-coincident contour, and the offset distance of the center point of the contour of the target to be tracked, which are respectively denoted K2, S and x2;
C4, calculating the shooting setting adaptation degree θ2 of the target aerial unmanned aerial vehicle under partial coincidence, wherein S_ref denotes the reference non-coincident contour area, b3, b4 and b5 respectively denote the set adaptation-evaluation weights corresponding to the area-occupation-ratio deviation, the center-point-offset deviation and the non-coincident contour area under partial coincidence, and γ3 denotes the set shooting-setting adaptation-evaluation correction factor under partial coincidence.
7. The intelligent analysis processing method for high-definition video content based on image recognition as claimed in claim 6, wherein the intelligent analysis processing method is characterized by comprising the following steps: the aerial photographing setting parameters comprise aerial photographing speed and GPS position coordinates.
8. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 7, wherein: the adaptive adjustment of the shooting posture of the aerial unmanned aerial vehicle includes adaptive adjustment when the type of the target to be tracked is moving and adaptive adjustment when the type of the target to be tracked is non-moving, wherein the specific adjustment process when the type of the target to be tracked is moving is:
d1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than a set reference value and the contour of the target to be tracked is completely overlapped with the input contour of the target to be tracked, extracting the area occupation ratio of the contour image of the target to be tracked and the offset distance and the relative azimuth of the center point of the contour of the target to be tracked, and extracting the current GPS position coordinate from the aerial setting parameters;
d2, when the area occupation ratio of the contour image of the target to be tracked is smaller than a set value, sending a descent instruction to the target aerial unmanned aerial vehicle, with the computed height adjustment H taken as the descent value, wherein H_ref denotes the reference movement height value corresponding to the set unit area-occupation-ratio deviation;
d3, when the area occupation ratio of the profile image of the target required to be tracked is larger than a set value, sending a rising instruction to the target aerial unmanned aerial vehicle, and taking H as a rising value;
d4, when the center point position of the contour of the target to be tracked lies to the left of the center point position of the captured image, sending a left-shift instruction to the target aerial unmanned aerial vehicle with x1 as the left-shift distance value, and when it lies to the right of the center point position of the captured image, sending a right-shift instruction to the target aerial unmanned aerial vehicle with x1 as the right-shift distance value;
d5, when the contour of the target to be tracked partially coincides with the input contour of the target to be tracked, extracting the relative azimuth of the current contour image of the target to be tracked and the offset distance and relative azimuth of the contour center point, and extracting the current GPS position coordinates and the aerial photographing speed from the aerial photographing setting parameters;
d6, when the contour image of the target to be tracked is above the captured image, extracting the offset distance of the contour center point, denoted x_up, and sending an acceleration instruction to the target aerial unmanned aerial vehicle with v = x_up * v_ref as the acceleration value, wherein v_ref denotes the reference movement shooting speed corresponding to the set unit distance deviation;
d7, when the contour image of the target to be tracked is below the captured image, extracting the offset distance of the contour center point, denoted x_down, and sending a deceleration instruction to the target aerial unmanned aerial vehicle with v = x_down * v_ref as the deceleration value;
d8, when the center point position of the contour of the target to be tracked lies to the left or right of the center point position of the captured image, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment for the left and right cases under complete coincidence;
d9, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is larger than the set reference value, maintaining the current GPS position coordinates and aerial photographing speed.
9. The intelligent analysis processing method for high-definition video content based on image recognition according to claim 8, wherein the intelligent analysis processing method is characterized by comprising the following steps: the specific adjusting process for adaptively adjusting the shooting gesture when the type of the target required to be tracked is non-moving is as follows:
e1, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked completely coincides with the input contour, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment under complete coincidence when the type of the target to be tracked is moving;
e2, when the shooting setting adaptation degree of the target aerial unmanned aerial vehicle is smaller than the set reference value and the contour of the target to be tracked partially coincides with the input contour, extracting the relative azimuth of the current contour image of the target to be tracked and the offset distance and relative azimuth of the contour center point, and extracting the current GPS position coordinates from the aerial photographing setting parameters;
e3, when the contour image of the target to be tracked is above the captured image, extracting the offset distance of the contour center point, denoted x_front, and sending a forward-movement instruction to the target aerial unmanned aerial vehicle with x_front as the forward-movement distance value;
e4, when the contour image of the target to be tracked is below the captured image, extracting the offset distance of the contour center point, denoted x_back, and sending a backward-movement instruction to the target aerial unmanned aerial vehicle with x_back as the backward-movement distance value;
e5, when the center point position of the contour of the target to be tracked lies to the left or right of the center point position of the captured image, the adjustment is obtained in the same manner as the shooting posture adaptive adjustment for the left and right cases under complete coincidence.
CN202310853491.3A 2023-07-12 2023-07-12 Intelligent analysis processing method for high-definition video content based on image recognition Active CN116866719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310853491.3A CN116866719B (en) 2023-07-12 2023-07-12 Intelligent analysis processing method for high-definition video content based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310853491.3A CN116866719B (en) 2023-07-12 2023-07-12 Intelligent analysis processing method for high-definition video content based on image recognition

Publications (2)

Publication Number Publication Date
CN116866719A true CN116866719A (en) 2023-10-10
CN116866719B CN116866719B (en) 2024-02-02

Family

ID=88228230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310853491.3A Active CN116866719B (en) 2023-07-12 2023-07-12 Intelligent analysis processing method for high-definition video content based on image recognition

Country Status (1)

Country Link
CN (1) CN116866719B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117252867A (en) * 2023-11-14 2023-12-19 广州市品众电子科技有限公司 VR equipment production product quality monitoring analysis method based on image recognition

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010124399A (en) * 2008-11-21 2010-06-03 Mitsubishi Electric Corp Automatic tracking photographing apparatus from aerial mobile vehicle
CN106650620A (en) * 2016-11-17 2017-05-10 华南理工大学 Target personnel identifying and tracking method applying unmanned aerial vehicle monitoring
CN107284661A (en) * 2016-04-06 2017-10-24 成都积格科技有限公司 Police tracking moving object unmanned plane
CN109885100A (en) * 2017-12-06 2019-06-14 智飞智能装备科技东台有限公司 A kind of unmanned plane target tracking searching system
US20200162682A1 (en) * 2017-07-31 2020-05-21 SZ DJI Technology Co., Ltd. Video processing method, device, aircraft, and system
CN111898438A (en) * 2020-06-29 2020-11-06 北京大学 Multi-target tracking method and system for monitoring scene
CN112927264A (en) * 2021-02-25 2021-06-08 华南理工大学 Unmanned aerial vehicle tracking shooting system and RGBD tracking method thereof
CN114820707A (en) * 2022-04-27 2022-07-29 深圳市易智博网络科技有限公司 Calculation method for camera target automatic tracking
CN116168306A (en) * 2022-09-08 2023-05-26 中国电子科技集团公司第二十八研究所 Unmanned aerial vehicle video target tracking method based on region re-search



Also Published As

Publication number Publication date
CN116866719B (en) 2024-02-02

Similar Documents

Publication Publication Date Title
CN107230218B (en) Method and apparatus for generating confidence measures for estimates derived from images captured by vehicle-mounted cameras
CN111311679B (en) Free floating target pose estimation method based on depth camera
CN108711166A (en) A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN107844750A (en) A kind of water surface panoramic picture target detection recognition methods
CN109087261B (en) Face correction method based on unlimited acquisition scene
CN116866719B (en) Intelligent analysis processing method for high-definition video content based on image recognition
WO2019127518A1 (en) Obstacle avoidance method and device and movable platform
CN110443247A (en) A kind of unmanned aerial vehicle moving small target real-time detecting system and method
CN111735445A (en) Monocular vision and IMU (inertial measurement Unit) integrated coal mine tunnel inspection robot system and navigation method
CN113066050B (en) Method for resolving course attitude of airdrop cargo bed based on vision
CN112947526B (en) Unmanned aerial vehicle autonomous landing method and system
CN109871024A (en) A kind of UAV position and orientation estimation method based on lightweight visual odometry
CN114967731A (en) Unmanned aerial vehicle-based automatic field personnel searching method
WO2022152050A1 (en) Object detection method and apparatus, computer device and storage medium
CN114689030A (en) Unmanned aerial vehicle auxiliary positioning method and system based on airborne vision
CN114549549A (en) Dynamic target modeling tracking method based on instance segmentation in dynamic environment
CN112686149A (en) Vision-based autonomous landing method for near-field section of fixed-wing unmanned aerial vehicle
Fucen et al. The object recognition and adaptive threshold selection in the vision system for landing an unmanned aerial vehicle
CN112588621A (en) Agricultural product sorting method and system based on visual servo
CN113781524B (en) Target tracking system and method based on two-dimensional label
CN116665097A (en) Self-adaptive target tracking method combining context awareness
CN115797405A (en) Multi-lens self-adaptive tracking method based on vehicle wheel base
CN112862862B (en) Aircraft autonomous oil receiving device based on artificial intelligence visual tracking and application method
Xin et al. A monocular visual measurement system for UAV probe-and-drogue autonomous aerial refueling
Yuan et al. A method of vision-based state estimation of an unmanned helicopter

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TA01 Transfer of patent application right

Effective date of registration: 20240126

Address after: Room 301, block B, Jinan International Innovation Industrial Park, No. 2 Wanshou Road, Shizhong District, Jinan, Shandong Province, 250000

Applicant after: SHANDONG CHENGSHI ELECTRONIC TECHNOLOGY CO.,LTD.

Country or region after: China

Address before: Room 302, Building B, International Innovation Industrial Park, No. 2 Wanshou Road, Shizhong District, Jinan City, Shandong Province, 250000

Applicant before: Shandong Henghui Software Co.,Ltd.

Country or region before: China

TA01 Transfer of patent application right