CN112906475A - Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle - Google Patents


Info

Publication number
CN112906475A
Authority
CN
China
Prior art keywords
image
macro block
compensation
motion vector
initial
Prior art date
Legal status
Granted
Application number
CN202110070839.2A
Other languages
Chinese (zh)
Other versions
CN112906475B (en)
Inventor
杨慧 (Yang Hui)
Current Assignee
Zhengzhou Gaosun Information Technology Co ltd
Original Assignee
Zhengzhou Kaiwen Electronic Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhengzhou Kaiwen Electronic Technology Co., Ltd.
Priority to CN202110070839.2A
Publication of CN112906475A
Application granted
Publication of CN112906475B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/10: Terrestrial scenes
    • G06V20/13: Satellite images
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/60: Analysis of geometric attributes
    • G06T7/62: Analysis of geometric attributes of area, perimeter, diameter or volume
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/40: Extraction of image or video features
    • G06V10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681: Motion detection
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/68: Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/689: Motion occurring during a rolling shutter mode
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80: Camera processing pipelines; Components thereof

Abstract

The invention relates to the technical field of artificial intelligence, and in particular to an artificial intelligence-based rolling shutter imaging method and system for an urban surveying and mapping unmanned aerial vehicle. The model frame corresponding to the urban building image is matched in the CIM according to key points extracted from the urban building image acquired in real time; the urban building image and the model frame are correspondingly divided into N macro block images, offset compensation is performed by comparing each pair of corresponding macro block images to obtain initial macro block compensation images, and the initial macro block compensation images are further optimized according to the motion vector of each macro block image. Matching the model frame in the CIM improves matching accuracy, and performing offset compensation and motion vector compensation on each macro block image of the urban building image effectively eliminates the jelly effect, so that the resulting image quality is better.

Description

Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an artificial intelligence-based rolling shutter imaging method and system for an unmanned aerial vehicle for urban surveying and mapping.
Background
The wobble, skew, smear, partial exposure and similar distortions produced when a rolling-shutter camera photographs an object in high-speed motion, or when the camera itself is in high-speed motion while shooting, are collectively called the jelly effect.
A global shutter can effectively avoid the jelly effect, but the rolling shutter offers simpler readout, higher cost efficiency, less transistor heat, lower electronic noise and other advantages, so it remains the most widely used shutter technology.
In urban surveying and mapping, an unmanned aerial vehicle carrying a rolling shutter camera is often used for high-altitude operation. During such operation, however, vibration of the airframe and the sensor is inevitable, for example because of propeller imbalance, and this causes the jelly effect. At present, the shaking of the unmanned aerial vehicle is reduced by fitting it with a gimbal, which stabilizes the picture. Although a gimbal can suppress shake, once faults such as loose screws appear, the gimbal itself inevitably becomes a vibration source, and its vibration again produces the jelly effect.
In the prior art, the jelly effect is generally eliminated by dividing the image acquired in real time into blocks and then performing image compensation optimization according to the offset vector or motion vector between corresponding blocks in consecutive frames.
In practice, the inventors found the following disadvantage in the above prior art: offset prediction or motion vector estimation uses the previous frame as the reference image for the next frame, so when a strong jelly effect occurs and the selected reference image itself suffers from it, the prediction carries a large error.
Disclosure of Invention
In order to solve the technical problems, the invention aims to provide an artificial intelligence-based rolling shutter imaging method and system for an unmanned aerial vehicle for urban surveying and mapping, and the adopted technical scheme is as follows:
In a first aspect, an embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging method for an urban surveying and mapping unmanned aerial vehicle, the method comprising:
acquiring an urban building image and its depth image by using an image acquisition device, and obtaining the key points of the building rigid body from the urban building image by using a key point detection network, wherein the key points are the corner points and anchor frames of the building rigid body;
combining the key points with the depth image to obtain a key point three-dimensional point cloud, and matching the key point three-dimensional point cloud to the model frame corresponding to the urban building image in the CIM (City Information Model);
correspondingly dividing the urban building image and the model frame into N macro block images, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and when the area intersection ratio is smaller than an area threshold, performing offset compensation on the macro block images to obtain initial macro block compensation images; otherwise, storing the macro block image in an image buffer area;
when the initial macro block compensation images of the M frames which are nearest to each other can be matched with the corresponding reference macro block images, acquiring the motion vector of the initial macro block compensation images; otherwise, when none of the initial macro block compensation images of the nearest M frames can match the corresponding reference macro block image, searching an adjacent macro block image adjacent to the initial macro block compensation image in the image buffer area, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
Further, when no model frame for a new building image can be matched in the CIM, the compensation method for the new building image comprises:
forward-predicting the forward motion vector of the current frame, in which the new building image appears, from the macro block images of the A frames preceding it;
backward-predicting the backward motion vector of the current frame from the macro block images of the A frames following it;
and performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
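The claim above does not fix how the two predictions are combined; a minimal sketch, assuming the current frame lies midway between the A preceding and A following frames and that a simple weighted average suffices (`alpha` and the sign convention are assumptions):

```python
import numpy as np

def bidirectional_mv(forward_mv, backward_mv, alpha=0.5):
    """Blend a forward-predicted and a backward-predicted motion vector.

    alpha weights the forward prediction; 0.5 assumes the current frame
    sits midway between the preceding and following reference frames.
    """
    forward_mv = np.asarray(forward_mv, dtype=float)
    backward_mv = np.asarray(backward_mv, dtype=float)
    # The backward vector points from the future frame toward the current
    # one, so it is negated before being averaged with the forward vector.
    return alpha * forward_mv - (1.0 - alpha) * backward_mv
```

With symmetric predictions (forward `[2, 2]`, backward `[-2, -2]`) the blend simply reproduces the common motion.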
Further, the offset compensation method comprises:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
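One concrete reading of this step: take the offset vector as the mean 3-D displacement between the matched key points of the captured macro block and of the model-frame macro block, and compensate by shifting with that vector. Function names and the use of the mean are illustrative, not mandated by the patent:

```python
import numpy as np

def offset_vector(keypoints_img, keypoints_model):
    """Mean displacement from the macro block's key points in the captured
    urban building image to the matched key points in the CIM model frame."""
    kp_img = np.asarray(keypoints_img, dtype=float)
    kp_model = np.asarray(keypoints_model, dtype=float)
    return (kp_model - kp_img).mean(axis=0)

def compensate_block(block_coords, offset):
    """Offset compensation: shift every coordinate of the macro block."""
    return np.asarray(block_coords, dtype=float) + offset
```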
Further, when all the initial macro block compensation images of the nearest M frames can be matched with the corresponding reference macro block images, the motion vector of the initial macro block compensation image is obtained through a search algorithm of motion estimation.
Further, the method for obtaining the motion vector of the initial macroblock compensation image when none of the initial macroblock compensation images of the nearest M frames can match the corresponding reference macroblock image comprises:
obtaining the motion vector of each adjacent macro block image through the search algorithm of the motion estimation;
and calculating the average motion vector of the adjacent macro block image, and taking the average motion vector as the motion vector of the initial macro block compensation image.
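The fallback described above reduces to averaging the neighbours' motion vectors:

```python
import numpy as np

def fallback_motion_vector(neighbor_mvs):
    """Average the motion vectors of the macro block images adjacent to the
    current initial macro block compensation image; this mean serves as the
    current block's motion vector when no reference macro block matches."""
    return np.mean(np.asarray(neighbor_mvs, dtype=float), axis=0)
```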
In a second aspect, another embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for a city mapping unmanned aerial vehicle, including:
the system comprises a key point detection unit, a data processing unit and a data processing unit, wherein the key point detection unit is used for acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
the image matching unit is used for obtaining a key point three-dimensional point cloud by combining the key points and the depth image, and matching the key point three-dimensional point cloud to the model frame corresponding to the urban building image in the CIM (City Information Model);
the offset compensation unit is used for correspondingly dividing the urban building image and the model frame into N macro block images, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and when the area intersection ratio is smaller than an area threshold value, performing offset compensation on the macro block images to obtain initial macro block compensation images; otherwise, storing the macro block image in an image buffer area;
a motion vector prediction unit, configured to obtain a motion vector of the initial macroblock compensation image when all the initial macroblock compensation images of the nearest M frames can be matched with corresponding reference macroblock images; otherwise, when none of the M nearest frames of the initial macro block compensation images can match the corresponding reference macro block image, searching an adjacent macro block image adjacent to the initial macro block compensation image in the image buffer area, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and the compensation optimization unit is used for correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
Further, when the model frame of a new building image cannot be matched in the CIM in the image matching unit, the new building image is compensated by:
a forward vector obtaining unit, configured to forward-predict the forward motion vector of the current frame, in which the new building image appears, from the macro block images of the A frames preceding it;
a backward vector obtaining unit, configured to backward-predict the backward motion vector of the current frame from the macro block images of the A frames following it; and an image compensation unit, configured to perform image compensation on the current frame by combining the forward motion vector and the backward motion vector.
Further, the method of offset compensation in the offset compensation unit includes:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
Further, the motion vector prediction unit includes a first motion vector detection unit, and the first motion vector detection unit is configured to obtain the motion vector of the initial macroblock compensation image through a search algorithm of motion estimation when all the initial macroblock compensation images of the nearest neighboring M frames can be matched to corresponding reference macroblock images.
Further, the motion vector unit further includes a second motion vector detection unit, the second motion vector detection unit is configured to obtain the motion vector of the initial macroblock compensation image when none of the initial macroblock compensation images of the nearest neighboring M frames can match the corresponding reference macroblock image, and the second motion vector detection unit further includes:
the vector analysis unit is used for obtaining the motion vector of each adjacent macro block image through the search algorithm of the motion estimation;
and the vector processing unit is used for calculating the average motion vector of the adjacent macro block image and taking the average motion vector as the motion vector of the initial macro block compensation image.
The embodiment of the invention has at least the following beneficial effects: (1) the model frame is matched in the CIM. Because the model frame is free of the jelly effect, using it as the reference image improves matching accuracy; performing offset compensation and motion vector compensation on each macro block image into which the urban building image is divided eliminates the jelly effect well, so that the image quality is better.
(2) When the motion vector of the current macro block image cannot be predicted from the relationship between adjacent frames, the adjacent macro block images in the image buffer are used instead, based on the principle that adjacent macro blocks belonging to the same moving object are strongly correlated. Since the offset of these adjacent macro block images is small, taking their motion vector as the motion vector of the current macro block image improves the accuracy of motion vector compensation and reduces errors.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart of a rolling shutter imaging method of an unmanned aerial vehicle for urban mapping based on artificial intelligence according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating steps of a method for rolling shutter imaging of an unmanned aerial vehicle for urban mapping based on artificial intelligence according to an embodiment of the present invention;
fig. 3 is a block diagram of a rolling shutter imaging system of an artificial intelligence-based urban surveying and mapping unmanned aerial vehicle according to another embodiment of the present invention;
FIG. 4 is a block diagram of an image matching unit according to an embodiment of the present invention;
FIG. 5 is a block diagram of a motion vector prediction unit according to an embodiment of the present invention;
fig. 6 is a block diagram of a second motion vector detection unit according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its predetermined objects, the artificial intelligence-based rolling shutter imaging method and system for an urban surveying and mapping unmanned aerial vehicle are described in detail below with reference to the accompanying drawings and preferred embodiments. In the following description, different instances of "one embodiment" or "another embodiment" do not necessarily refer to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the artificial intelligence-based rolling shutter imaging method and system for the urban surveying and mapping unmanned aerial vehicle in detail by combining with the accompanying drawings.
Referring to FIGS. 1 and 2, the embodiment of the invention provides an artificial intelligence-based rolling shutter imaging method for an urban surveying and mapping unmanned aerial vehicle, which comprises the following specific steps:
and S001, acquiring the urban building image and the depth image of the urban building image by using image acquisition equipment, and acquiring the acquired urban building image to obtain key points of the rigid body of the building by using a key point detection network, wherein the key points are each corner point and each anchor frame of the rigid body of the building.
Step S002, obtaining a key point three-dimensional point cloud by combining the key points and the depth image, and matching the key point three-dimensional point cloud to the model frame corresponding to the urban building image in the CIM.
Step S003, dividing the city building image and the model frame into N macro block images correspondingly, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and performing offset compensation on the macro block image to obtain an initial macro block compensation image when the area intersection ratio is smaller than an area threshold value; otherwise, storing the macro block image in the image buffer.
Step S004, when the initial macro block compensation image of the nearest M frames can be matched with the corresponding reference macro block, obtaining the motion vector of the initial macro block compensation image; otherwise, when the initial macro block compensation image of the nearest M frames can not be matched with the corresponding reference macro block, searching the adjacent macro block image adjacent to the initial macro block compensation image in the image buffer area, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image.
In step S005, the compensation optimization is performed on the initial macroblock compensation image using the motion vector.
Further, in step S001, in the embodiment of the present invention, an RGBD camera carried by an unmanned aerial vehicle is used to capture an urban building, and two-dimensional image data of an acquired urban building image is sent to a key point detection network.
Preferably, in the embodiment of the present invention, a key point detection network of an encoder-decoder structure is adopted to perform key point detection on a building rigid body with obvious features in an acquired urban building image, wherein a specific training process of the key point detection network is as follows:
1) The data set consists of a large number of urban building images collected by the unmanned aerial vehicle that contain typical building features. Each image mainly shows urban buildings and may also include urban roads and other background. 60% of the data set is randomly selected as the training set, 20% as the validation set, and 20% as the test set.
2) The labels of the data set are key point labels, i.e. the positions of targets are marked with key points in the urban building image. In the embodiment of the invention, two kinds of key points are detected: the corner points and the anchor frames of the building rigid body. The labeling process is as follows: mark the key point positions on a single channel of the same size as the data image, with corner points labeled 1 and anchor frames labeled 2; then process the channel with a Gaussian kernel so that each key point forms a key point Gaussian hotspot.
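A minimal sketch of rendering one labeled key point as a Gaussian hotspot on a single-channel label map; the value of `sigma` is an assumption, as the patent does not specify the kernel width:

```python
import numpy as np

def gaussian_hotspot(h, w, cx, cy, sigma=2.0):
    """Render one key point at (cx, cy) as a Gaussian hotspot on an
    h x w single-channel label map, with peak value 1.0 at the key point."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

A full label map would be the element-wise maximum of the hotspots of all key points in the image.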
3) The loss function in the key point detection network adopts a mean square error loss function.
It should be noted that the embodiment of the present invention collects the key points of the building rigid body because, during urban surveying and mapping by the unmanned aerial vehicle, the building rigid body is the main feature group in the urban building image, and its key points are not only easy to collect but also highly representative.
Further, in step S002, the embodiment of the present invention performs three-dimensional point cloud conversion by combining the detected key points of the building rigid body and the depth image of the city building image collected by the RGBD camera to obtain a key point three-dimensional point cloud.
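The conversion of a 2-D key point plus its depth value into a 3-D point follows the standard pinhole back-projection; the intrinsic parameters `fx, fy, cx, cy` come from the RGBD camera calibration (the values used below are illustrative):

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth d into a 3-D point in the
    camera frame using the pinhole model: x = (u - cx) * d / fx, etc."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])
```

Applying this to every detected key point yields the key point three-dimensional point cloud.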
Furthermore, the position of the unmanned aerial vehicle is positioned in real time by combining the air route planning of the unmanned aerial vehicle and the IMU and GPS information, the obtained position information of the unmanned aerial vehicle is utilized to carry out region division in the CIM, and the model frame corresponding to the urban building image acquired by the unmanned aerial vehicle is quickly matched in the CIM according to the region where the unmanned aerial vehicle is located.
Further, in the embodiment of the present invention, point cloud registration is performed between the key point three-dimensional point cloud and the region of the CIM model corresponding to the real-time activity area of the unmanned aerial vehicle, to obtain the model frame corresponding to the urban building image. The specific registration process is as follows:
1) and extracting the key points of the CIM model and the key points of the urban building image according to the same key point selection standard from the three-dimensional point cloud data of the CIM model and the urban building image.
2) And respectively calculating characteristic descriptors of the selected CIM model key points and the key points of the urban building image.
3) And estimating the corresponding relation between the CIM model and the urban building image by combining the coordinate positions of the feature descriptors in the CIM model and the urban building image in two data sets and taking the similarity of the features and the positions between the CIM model and the urban building image as a basis, and preliminarily estimating to obtain corresponding point pairs.
4) If the data sets are noisy, the false correspondence pairs, which would degrade the registration, are removed.
5) The rigid transformation of the building is estimated from the remaining correct correspondences, the corresponding rotation matrix and translation vector are solved, and the registration process is complete.
It should be noted that: (1) the point cloud registration process is divided into two stages, coarse registration and fine registration. Preferably, in the embodiment of the present invention, the coarse registration uses a registration algorithm based on feature matching, for example an AO algorithm based on SHOT point features, and the fine registration uses the ICP algorithm.
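The closed-form sub-problem solved in step 5) (and inside every ICP iteration) — finding the least-squares rotation and translation between matched point sets — can be sketched with the Kabsch/SVD method. This is a generic sketch, not the patent's exact procedure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t aligning matched point
    sets src -> dst (the closed-form step of each ICP iteration)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection instead of a rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```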
(2) Key point matching not only improves matching accuracy but also effectively reduces the amount of computation in the point cloud registration process.
Further, in step S003, the embodiment of the present invention performs offset detection and offset compensation between the model frame obtained from the CIM through point cloud registration and the urban building image. The specific processes are as follows:
1) dividing the city building image and the model frame which are collected in real time into N macro block images respectively, and comparing the city building image with each corresponding macro block image in the model frame.
2) Calculate the area intersection ratio of the key point Gaussian hotspots in each pair of corresponding macro block images. When the area intersection ratio is smaller than the area threshold, the key point is considered to have a large offset, and offset compensation of the macro block image is required; otherwise, when the area intersection ratio is greater than or equal to the area threshold, the macro block image is considered to have no offset or only a small offset, and it is stored in the image buffer.
3) And for the macro block image with the area intersection ratio smaller than the area threshold, calculating an offset vector according to the three-dimensional coordinates of the key points in the macro block image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector to obtain an initial macro block compensation image.
Preferably, the embodiment of the present invention divides the city building image and the model frame into standard 16 × 16 macroblock images.
Preferably, the area threshold value in the embodiment of the present invention is 70%.
It should be noted that judging whether a key point has a large offset by the area intersection ratio of its Gaussian hotspot avoids quantitatively computing the offset vectors of all key points, which effectively reduces the amount of computation.
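The decision above can be sketched as an intersection-over-union test on the hotspot regions. Binarizing the Gaussian hotspots at level 0.5 is an assumption (the patent does not say how the hotspot area is delimited); the 70% area threshold is the patent's stated preference:

```python
import numpy as np

def hotspot_iou(heat_a, heat_b, level=0.5):
    """Area intersection-over-union of two key point Gaussian hotspots,
    taken as the binary regions at or above `level` (an assumed cutoff)."""
    a = np.asarray(heat_a) >= level
    b = np.asarray(heat_b) >= level
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def needs_offset_compensation(iou, area_threshold=0.7):
    """Blocks whose hotspot IoU falls below the 70% threshold go to offset
    compensation; the rest are stored in the reference image buffer."""
    return iou < area_threshold
```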
Further, since offset compensation is only a relatively coarse method and does not account for the continuous change of the urban building image along the direction of the motion vector, in step S004 the embodiment of the present invention performs compensation optimization by obtaining the motion vector of the initial macro block compensation image. The specific process is as follows:
1) It is assumed that all pixels in each of the non-overlapping macro block images into which the urban building image is divided share the same displacement. For each offset-compensated initial macro block compensation image, the macro block image in the reference frame most similar to it according to a matching criterion, called the reference macro block image, is found; the relative displacement between the reference macro block image and the current initial macro block compensation image is the motion vector.
2) The reference frame consists of the macro block images whose area intersection ratio was greater than or equal to the area threshold during offset detection, which were stored in the image buffer; that is, of the N macro block images into which each frame of the urban building image is divided, only those at or above the area threshold are stored in the image buffer as the reference macro block images of the reference frame.
3) As the matching criterion between macro block images, the embodiment of the invention adopts the sum of absolute differences (SAD), which requires no multiplication and is simple to implement. In addition, the embodiment of the present invention adopts the Diamond Search (DS) method as the search algorithm for finding the best reference macro block image; it is simple, robust and efficient. The main search process of the DS algorithm is as follows:
a. In general, motion vectors are highly concentrated near the center of the search window; this is especially obvious for video sequences in which objects move slowly, because stationary and slowly moving blocks dominate, which makes the method well suited to aerial surveying and mapping by an unmanned aerial vehicle. This center-biased property of the motion vector means that the best reference macro block image can be found quickly by searching only the points near the window center instead of every point in the window.
b. In the initial stage, the large diamond search template is applied repeatedly until the optimal reference macroblock image falls at the center of the large diamond. Because the large diamond template has a large step size and a wide search range, it achieves coarse positioning and keeps the search from being trapped in a local minimum.
c. After coarse positioning, the optimal reference macroblock image is considered to lie within the diamond region enclosed by the 8 points around the large diamond template; the small diamond search template is then used to position the optimal reference macroblock image precisely, without large fluctuations, improving motion estimation accuracy.
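The SAD criterion and the two-stage diamond search described above can be sketched as follows. This is a minimal, illustrative implementation: the grayscale-frame layout, the template offsets, the boundary handling, and the function names are assumptions, not the patent's exact procedure.

```python
import numpy as np

def sad(block_a, block_b):
    """Sum of absolute differences: the matching criterion (no multiplications)."""
    return int(np.abs(block_a.astype(np.int64) - block_b.astype(np.int64)).sum())

# Large diamond (LDSP) and small diamond (SDSP) offsets relative to the center.
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (-1, 1), (1, -1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def diamond_search(ref_frame, cur_block, start, block=16):
    """Stage 1: repeat the large diamond until the best point is its center
    (coarse positioning). Stage 2: one small-diamond pass (fine positioning).
    Returns the top-left corner of the best reference macroblock."""
    h, w = ref_frame.shape
    cy, cx = start

    def cost(y, x):
        if y < 0 or x < 0 or y + block > h or x + block > w:
            return float("inf")  # candidate outside the frame
        return sad(ref_frame[y:y + block, x:x + block], cur_block)

    while True:  # coarse stage with the large diamond template
        costs = [(cost(cy + dy, cx + dx), (cy + dy, cx + dx)) for dy, dx in LDSP]
        best_cost, (by, bx) = min(costs)
        if (by, bx) == (cy, cx):
            break
        cy, cx = by, bx
    # fine stage with the small diamond template
    costs = [(cost(cy + dy, cx + dx), (cy + dy, cx + dx)) for dy, dx in SDSP]
    return min(costs)[1]
```

On a frame with a smooth intensity gradient, the search walks from the start point to the exact location of the matching 16×16 block.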
4) Through step 3), the two most similar macroblock images can be matched, i.e., the reference macroblock image of the current initial macroblock compensation image can be found. In the embodiment of the invention, the reference macroblock image is searched from near to far, with the search window set to M frames. When the M reference frames nearest to the current initial macroblock compensation image can be matched with a reference macroblock image, the motion vectors between macroblocks are estimated using the EPZS (Enhanced Predictive Zonal Search) algorithm to obtain the motion vector of the current initial macroblock compensation image. When the nearest M reference frames cannot be matched with a reference macroblock image, an adjacent macroblock image next to the current initial macroblock compensation image is found, based on the principle that the motions of two adjacent macroblocks belonging to the same moving object are strongly correlated; the reference macroblock of that adjacent macroblock image is matched within the nearest M reference frames, its motion vector is estimated with the EPZS algorithm, and the motion vector of the current initial macroblock compensation image is then obtained through the motion vector estimation formula.
Wherein, the motion vector estimation formula is:

ε = Σ_{i=1}^{j} w_i · ε_i

where ε is the motion vector of the current initial macroblock compensation image, w_i is the weight of the motion vector of the i-th adjacent macroblock image, ε_i is the motion vector of the i-th adjacent macroblock image, and j is the number of adjacent macroblock images of the current initial macroblock compensation image.
It should be noted that the weights are determined by the number of adjacent macroblock images: for example, when a macroblock image has three adjacent macroblock images, the weight of each adjacent macroblock image's motion vector is 1/3.
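With equal weights w_i = 1/j, the formula reduces to the mean of the neighbors' motion vectors. A minimal sketch, where representing motion vectors as (vx, vy) tuples is an assumption for illustration:

```python
def estimate_motion_vector(neighbor_mvs):
    """Motion vector of the current initial macroblock compensation image as the
    equally weighted (1/j) sum of its j adjacent macroblocks' motion vectors."""
    j = len(neighbor_mvs)
    if j == 0:
        raise ValueError("no adjacent macroblock images available")
    w = 1.0 / j  # each adjacent motion vector gets weight 1/j
    vx = sum(w * mv[0] for mv in neighbor_mvs)
    vy = sum(w * mv[1] for mv in neighbor_mvs)
    return (vx, vy)
```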
The EPZS (Enhanced Predictive Zonal Search) algorithm is a search algorithm for integer-pixel motion estimation that adopts predictors with high correlation, i.e., it uses existing information to predict the motion vector. The EPZS algorithm mainly comprises the following steps:
a. Motion vector predictor selection: the search starting point is selected according to temporal and spatial correlation.
b. Adaptive early termination: since the matching errors of neighboring macroblocks are correlated, termination conditions are introduced to speed up motion vector estimation.
c. Motion vector refinement: if the condition for adaptive early termination is not satisfied, a further search is performed around the location where the matching error is smallest.
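The three steps above can be sketched roughly as follows. The function name, the `cost` callback, the predictor list, and the fixed threshold are all hypothetical; real EPZS uses richer predictor sets and adaptive thresholds.

```python
def epzs_like_search(cost, predictors, threshold):
    """EPZS-style integer-pel search sketch:
    a) evaluate the predicted start points (temporal/spatial predictors),
    b) terminate early if the best predictor's cost is below the threshold,
    c) otherwise refine around the best predictor with a small diamond."""
    best_mv = min(predictors, key=cost)
    best_cost = cost(best_mv)
    if best_cost <= threshold:          # adaptive early termination
        return best_mv, best_cost
    improved = True
    while improved:                     # small-diamond refinement
        improved = False
        for dy, dx in ((0, -1), (0, 1), (-1, 0), (1, 0)):
            cand = (best_mv[0] + dy, best_mv[1] + dx)
            c = cost(cand)
            if c < best_cost:
                best_mv, best_cost, improved = cand, c, True
    return best_mv, best_cost
```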
Preferably, in the embodiment of the present invention, a value of M in the nearest M frame reference frame is 10.
Preferably, the EPZS enhanced prediction region search algorithm in the embodiment of the present invention is a minimum diamond search algorithm.
Further, in step S005, compensation optimization is performed on the initial macroblock compensation images obtained after offset compensation, according to the obtained motion vector of each macroblock image in the urban building image acquired in real time.
Further, when the unmanned aerial vehicle performs surveying and mapping over a long period, images of new buildings may be acquired. When the CIM model does not contain information on a new building, motion offset compensation cannot be performed against a model frame, nor can the new building's information be predicted from previously captured images. Therefore, the embodiment of the invention performs reverse prediction compensation on new building images appearing in the collected urban building images by means of the image buffer, as follows:
1) An image buffer is set up. Suppose 2A+1 frames of urban building images in the buffer form the image compensation window, with the current frame at position A+1; the current frame is the first appearance of the new building image. After processing, the current frame is sent directly to the display for playback, then the window slides back one frame, the next frame becomes the current frame to be compensated, and so on.
2) In the image compensation window, the A frames preceding the current frame are used as reference frames, and compensation optimization is performed by combining the offset compensation and the motion vector prediction of step S003; the A frames following the current frame, which contain urban building images with the new building, serve as reference frames for reverse compensation, so that reverse prediction compensation can be applied to the current frame. The principle is as follows: taking the frame rate of hundreds of frames per second of current mainstream cameras as an example, the acquired frames need not be sent to the display immediately in real time; instead, display is delayed by A frames. Because the time corresponding to A frames is very short and the real-time requirement of urban surveying and mapping is not strict, the A-frame delay produces no perceptible lag to the human eye during actual playback, i.e., the delay is negligible.
3) When the current frame is processed, it is compensated according to the motion vector obtained from the A frames preceding the current frame and the motion vector obtained in reverse from the A frames following it.
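The sliding 2A+1-frame window and the bidirectional compensation of steps 1)–3) might be sketched as below. How the forward and backward motion vectors are combined is not specified in the text, so the simple average used here is an assumption.

```python
from collections import deque

A = 10  # frames on each side of the current frame (value used in this embodiment)

def bidirectional_mv(forward_mv, backward_mv):
    """Combine the forward MV (predicted from the previous A frames) and the
    backward MV (predicted in reverse from the following A frames).
    The averaging is an illustrative assumption."""
    return ((forward_mv[0] + backward_mv[0]) / 2.0,
            (forward_mv[1] + backward_mv[1]) / 2.0)

def process_stream(frames):
    """Delay display by A frames: the window holds 2A+1 frames and the frame at
    index A is the 'current frame' that gets compensated and displayed."""
    window = deque(maxlen=2 * A + 1)
    displayed = []
    for f in frames:
        window.append(f)
        if len(window) == window.maxlen:
            displayed.append(window[A])  # compensation of window[A] would go here
    return displayed
```

With A = 10, the first displayable current frame is the 11th frame of a 21-frame-deep buffer, i.e., the display lags the capture by 10 frames.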
4) Because a model frame lacking the new building image in the CIM model cannot be used to judge, from the urban building images containing the new building in the following A frames, whether the jelly effect occurs, detection is instead performed based on the motion correlation of adjacent macroblocks, as follows:
a. The adjacent macroblock images of the macroblock image containing the new building are examined. If, in the nearest A+1 following frames, any area intersection ratio of an adjacent macroblock image is smaller than the area threshold, the macroblock image containing the new building is considered to exhibit the jelly effect; otherwise, that adjacent macroblock image is a stable macroblock image.
b. If a stable macroblock image exists within the nearest A+1 following frames, it is used as the reference macroblock image; the motion vector of the current frame is derived in reverse using the EPZS algorithm, and compensation is performed according to that motion vector to better eliminate the jelly effect. If no stable macroblock exists within those A+1 frames, an adjacent macroblock image next to the macroblock containing the new building is found in the current frame, the motion vectors of that adjacent macroblock image over the following A frames are obtained with the EPZS algorithm, the following A frames are compensated with those motion vectors, the motion vector of the current frame is then calculated in reverse from the compensated frames, and the current frame is compensated accordingly.
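The stability test of step a. can be sketched as a check over one adjacent macroblock's area intersection ratios across the nearest A+1 following frames. The dictionary representation, the function names, and the `AREA_THRESHOLD` value are assumptions for illustration; the patent does not fix the threshold's numeric value.

```python
AREA_THRESHOLD = 0.8  # illustrative value only

def is_stable_macroblock(area_ratios, area_threshold):
    """area_ratios: area intersection ratios of one adjacent macroblock image over
    the nearest A+1 following frames. If any ratio falls below the threshold, the
    block containing the new building is treated as jelly-affected; otherwise this
    adjacent macroblock is stable and usable as a reverse reference."""
    return all(r >= area_threshold for r in area_ratios)

def find_stable_neighbor(neighbors):
    """neighbors: mapping of neighbor id -> list of area ratios. Returns the first
    stable neighbor id, or None (triggering the fallback path of step b.)."""
    for name, ratios in neighbors.items():
        if is_stable_macroblock(ratios, AREA_THRESHOLD):
            return name
    return None
```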
Preferably, in the embodiment of the present invention, the value of the frame number a is 10.
In summary, the embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging method for an urban surveying and mapping unmanned aerial vehicle. The method matches the model frame of the urban building image in the CIM model according to the key points of the urban building image acquired in real time; divides the urban building image and the model frame into standard 16×16 macroblock images and performs offset compensation by comparing each pair of corresponding macroblock images to obtain initial macroblock compensation images, which are then further optimized according to the motion vector of each macroblock image; and, at the same time, performs image compensation on newly appearing building images using the motion vectors of the 10 frames before and the 10 frames after the frame in question. Matching the model frame in the CIM model improves matching accuracy, and performing offset compensation and motion vector compensation on each macroblock image of the urban building image largely eliminates the jelly effect, yielding better image quality.
Based on the same inventive concept as the method, the embodiment of the invention provides an artificial intelligence-based rolling shutter imaging system of an unmanned aerial vehicle for urban surveying and mapping.
Referring to fig. 3, an embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for an unmanned aerial vehicle for urban surveying and mapping, where the system includes: a keypoint detection unit 10, an image matching unit 20, an offset compensation unit 30, a motion vector prediction unit 40, and a compensation optimization unit 50.
The key point detection unit 10 is configured to acquire a city building image and a depth image of the city building image by using an image acquisition device, and acquire a key point of a building rigid body from the city building image by using a key point detection network, where the key point is each corner point and anchor frame of the building rigid body; the image matching unit 20 is configured to obtain a three-dimensional point cloud of the key point by combining the key point and the depth image, and match the three-dimensional point cloud of the key point with a corresponding model frame of the city building image in the CIM model; the offset compensation unit 30 is configured to divide the city building image and the model frame into N macroblock images, calculate an area intersection ratio of a key point gaussian hot spot in each corresponding macroblock image, and perform offset compensation on the macroblock image to obtain an initial macroblock compensation image when the area intersection ratio is smaller than an area threshold; otherwise, storing the macro block image in an image buffer area; the motion vector prediction unit 40 is configured to obtain a motion vector of the initial macroblock compensation image when the initial macroblock compensation image of the nearest M frames can all match with the corresponding reference macroblock image; otherwise, when the nearest M frames of initial macro block compensation images can not be matched with the corresponding reference macro block images, searching adjacent macro block images adjacent to the initial macro block compensation images in the image buffer area, and calculating the motion vectors of the adjacent macro block images to obtain the motion vectors of the initial macro block compensation images; the compensation optimization unit 50 is configured to perform compensation optimization on the initial macroblock compensation image correspondingly by using the motion vector.
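The "area intersection ratio of keypoint Gaussian hotspots" used by the offset compensation unit 30 can be read as an intersection-over-union. A minimal sketch, assuming the hotspots are given as boolean masks over a macroblock (the mask representation and function names are assumptions):

```python
import numpy as np

def area_intersection_ratio(mask_a, mask_b):
    """Intersection-over-union of the keypoint Gaussian hotspot regions in two
    corresponding macroblock images, given as boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union else 1.0

def needs_offset_compensation(mask_img, mask_model, area_threshold):
    """Macroblocks whose ratio falls below the threshold get offset compensation;
    the others are stored in the image buffer as reference macroblocks."""
    return area_intersection_ratio(mask_img, mask_model) < area_threshold
```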
Further, referring to fig. 4, when the model frame of the new building image cannot be matched in the CIM model in the image matching unit 20, the compensation method for the new building image includes a forward vector obtaining unit 21, a reverse vector obtaining unit 22, and an image compensation unit 23:
the forward vector obtaining unit 21 is configured to predict a forward motion vector of the current frame by using a macroblock image of a previous a frame of the current frame where a new building image appears; the backward vector obtaining unit 22 is configured to predict a backward motion vector of the current frame by using a macroblock image of a frame a after the current frame; the image compensation unit 23 is used for image compensation of the current frame by combining the forward motion vector and the backward motion vector.
Further, the method of offset compensation in the offset compensation unit 30 includes:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
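The offset-vector construction described above might be sketched as follows. Taking the offset as the mean displacement between matched keypoint 3D coordinates is an assumption; the patent only states that the offset vector is derived from the two sets of coordinates.

```python
import numpy as np

def offset_vector(image_kps_3d, model_kps_3d):
    """Offset vector of a macroblock, taken here as the mean displacement from the
    keypoints' 3D coordinates in the captured macroblock to the 3D coordinates of
    the matching keypoints in the model-frame macroblock."""
    return np.mean(np.asarray(model_kps_3d) - np.asarray(image_kps_3d), axis=0)

def offset_compensate(points_3d, offset):
    """Apply the offset vector to the macroblock's points (offset compensation)."""
    return np.asarray(points_3d, dtype=float) + offset
```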
Further, referring to fig. 5 and 6, the motion vector prediction unit 40 includes a first motion vector detection unit 41, and the first motion vector detection unit 41 is configured to obtain a motion vector of the initial macroblock compensation image through a search algorithm of motion estimation when the initial macroblock compensation image of the nearest M frames can all match with the corresponding reference macroblock image.
The motion vector prediction unit 40 further includes a second motion vector detection unit 42 for acquiring the motion vector of the initial macroblock compensation image when none of the initial macroblock compensation images of the nearest M frames can be matched to the corresponding reference macroblock image, and the second motion vector detection unit 42 further includes a vector analysis unit 421 and a vector processing unit 422:
the vector analysis unit 421 is configured to obtain a motion vector of each adjacent macroblock image through a search algorithm of motion estimation; the vector processing unit 422 is configured to calculate an average motion vector of the neighboring macroblock image, and use the average motion vector as the motion vector of the initial macroblock compensation image.
In summary, the embodiment of the present invention provides an artificial intelligence-based rolling shutter imaging system for an urban surveying and mapping unmanned aerial vehicle. The system inputs the acquired urban building images into the key point detection unit 10 to obtain the key points of the urban buildings; matches the model frame of the urban building image in the image matching unit 20 according to those key points; performs offset compensation on each corresponding macroblock image of the urban building image and the model frame through the offset compensation unit 30 to obtain the initial macroblock compensation images; further performs compensation optimization on the initial macroblock compensation images through the motion vector prediction unit 40 and the compensation optimization unit 50; and, at the same time, performs image compensation on newly appearing building images through the forward vector acquisition unit 21, the reverse vector acquisition unit 22, and the image compensation unit 23. Matching the model frame in the CIM model improves matching accuracy, and performing offset compensation and motion vector compensation on each macroblock image of the urban building image largely eliminates the jelly effect, yielding better image quality.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (10)

1. An artificial intelligence-based rolling shutter imaging method for an unmanned aerial vehicle for urban surveying and mapping is characterized by comprising the following steps:
acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
combining the key points and the depth image to obtain a key point three-dimensional point cloud, and matching the key point three-dimensional point cloud to a corresponding model frame of the urban building image in a CIM (common information model);
correspondingly dividing the urban building image and the model frame into N macro block images, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and when the area intersection ratio is smaller than an area threshold, performing offset compensation on the macro block images to obtain initial macro block compensation images; otherwise, storing the macro block image in an image buffer area;
when the initial macro block compensation images of the M frames which are nearest to each other can be matched with the corresponding reference macro block images, acquiring the motion vector of the initial macro block compensation images; otherwise, when none of the initial macro block compensation images of the nearest M frames can match the corresponding reference macro block image, searching an adjacent macro block image adjacent to the initial macro block compensation image in the image buffer area, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
2. The method of claim 1, wherein when the model frame of a new building image cannot be matched in the CIM model, the compensating method for the new building image comprises:
predicting a forward motion vector of a current frame by using the macro block image of a previous A frame of the current frame in which the new building image appears;
backward predicting a backward motion vector of the current frame by using the macro block image of the frame A behind the current frame;
and performing image compensation on the current frame by combining the forward motion vector and the backward motion vector.
3. The method of claim 1, wherein the method of offset compensation comprises:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
4. The method as claimed in claim 1, wherein when all of the initial macroblock compensation images of the nearest neighboring M frames can be matched to the corresponding reference macroblock image, the motion vector of the initial macroblock compensation image is obtained by a search algorithm of motion estimation.
5. The method as claimed in claim 1, wherein said method for obtaining said motion vector of said initial macroblock compensation picture when none of said initial macroblock compensation pictures of the nearest neighboring M frames can match to the corresponding said reference macroblock picture comprises:
obtaining the motion vector of each adjacent macro block image through the search algorithm of the motion estimation;
and calculating the average motion vector of the adjacent macro block image, and taking the average motion vector as the motion vector of the initial macro block compensation image.
6. An artificial intelligence-based rolling shutter imaging system for an unmanned aerial vehicle for urban surveying and mapping is characterized in that the system comprises:
the system comprises a key point detection unit, a data processing unit and a data processing unit, wherein the key point detection unit is used for acquiring a city building image and a depth image of the city building image by using image acquisition equipment, and acquiring key points of a building rigid body from the city building image by using a key point detection network, wherein the key points are each angular point and an anchor frame of the building rigid body;
the image matching unit is used for obtaining a key point three-dimensional point cloud by combining the key point and the depth image, and matching the key point three-dimensional point cloud in a CIM (common information model) to a corresponding model frame of the urban building image;
the offset compensation unit is used for correspondingly dividing the urban building image and the model frame into N macro block images, calculating the area intersection ratio of the key point Gaussian hotspots in each corresponding macro block image, and when the area intersection ratio is smaller than an area threshold value, performing offset compensation on the macro block images to obtain initial macro block compensation images; otherwise, storing the macro block image in an image buffer area;
a motion vector prediction unit, configured to obtain a motion vector of the initial macroblock compensation image when all the initial macroblock compensation images of the nearest M frames can be matched with corresponding reference macroblock images; otherwise, when none of the M nearest frames of the initial macro block compensation images can match the corresponding reference macro block image, searching an adjacent macro block image adjacent to the initial macro block compensation image in the image buffer area, and calculating the motion vector of the adjacent macro block image to obtain the motion vector of the initial macro block compensation image;
and the compensation optimization unit is used for correspondingly performing compensation optimization on the initial macro block compensation image by using the motion vector.
7. The system of claim 6, wherein the compensation method for the new architectural image when the model frame of the new architectural image cannot be matched in the CIM model in the image matching unit comprises:
a forward vector obtaining unit, configured to predict a forward motion vector of a current frame by using the macroblock image of a previous a frame of the current frame where the new building image appears;
a backward vector obtaining unit, configured to predict a backward motion vector of the current frame by using the macroblock image of the frame a after the current frame;
and the image compensation unit is used for carrying out image compensation on the current frame by combining the forward motion vector and the reverse motion vector.
8. The system of claim 6, wherein the method of offset compensation in the offset compensation unit comprises:
and obtaining an offset vector of the macro block image according to the three-dimensional coordinates of the key points in the macro block image in the city building image and the three-dimensional coordinates of the key points in the corresponding macro block image in the model frame, and performing offset compensation on the macro block image by using the offset vector.
9. The system as claimed in claim 6, wherein said motion vector prediction unit comprises a first motion vector detection unit for obtaining said motion vector of said initial macroblock compensation picture through a search algorithm of motion estimation when said initial macroblock compensation picture of said nearest neighbor M frame can be matched to a corresponding reference macroblock picture.
10. The system of claim 6, wherein the motion vector unit further comprises a second motion vector detection unit for obtaining the motion vector of the initial macroblock compensation picture when none of the initial macroblock compensation pictures of the nearest neighboring M frames can match the corresponding reference macroblock picture, and the second motion vector detection unit further comprises:
the vector analysis unit is used for obtaining the motion vector of each adjacent macro block image through the search algorithm of the motion estimation;
and the vector processing unit is used for calculating the average motion vector of the adjacent macro block image and taking the average motion vector as the motion vector of the initial macro block compensation image.
CN202110070839.2A 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle Active CN112906475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110070839.2A CN112906475B (en) 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110070839.2A CN112906475B (en) 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Publications (2)

Publication Number Publication Date
CN112906475A true CN112906475A (en) 2021-06-04
CN112906475B CN112906475B (en) 2022-08-02

Family

ID=76116006

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110070839.2A Active CN112906475B (en) 2021-01-19 2021-01-19 Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle

Country Status (1)

Country Link
CN (1) CN112906475B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470093A (en) * 2021-09-01 2021-10-01 启东市德立神起重运输机械有限公司 Video jelly effect detection method, device and equipment based on aerial image processing
CN115379123A (en) * 2022-10-26 2022-11-22 山东华尚电气有限公司 Transformer fault detection method for inspection by unmanned aerial vehicle
CN115876785A (en) * 2023-02-02 2023-03-31 苏州誉阵自动化科技有限公司 Visual identification system for product defect detection

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5661524A (en) * 1996-03-08 1997-08-26 International Business Machines Corporation Method and apparatus for motion estimation using trajectory in a digital video encoder
CN101483713A (en) * 2009-01-16 2009-07-15 西安电子科技大学 Deinterleaving method based on moving target
US20090237516A1 (en) * 2008-02-20 2009-09-24 Aricent Inc. Method and system for intelligent and efficient camera motion estimation for video stabilization
CN101945284A (en) * 2010-09-29 2011-01-12 无锡中星微电子有限公司 Motion estimation device and method
CN102098440A (en) * 2010-12-16 2011-06-15 北京交通大学 Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN102523419A (en) * 2011-12-31 2012-06-27 上海大学 Digital video signal conversion method based on motion compensation
CN102917217A (en) * 2012-10-18 2013-02-06 北京航空航天大学 Movable background video object extraction method based on pentagonal search and three-frame background alignment
CN103096083A (en) * 2013-01-23 2013-05-08 北京京东方光电科技有限公司 Method and device of moving image compensation
WO2014000636A1 (en) * 2012-06-25 2014-01-03 北京大学深圳研究生院 Method for motion vector prediction and visual disparity vector prediction of multiview video coding
CN103581647A (en) * 2013-09-29 2014-02-12 北京航空航天大学 Depth map sequence fractal coding method based on motion vectors of color video

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5661524A (en) * 1996-03-08 1997-08-26 International Business Machines Corporation Method and apparatus for motion estimation using trajectory in a digital video encoder
US20090237516A1 (en) * 2008-02-20 2009-09-24 Aricent Inc. Method and system for intelligent and efficient camera motion estimation for video stabilization
US8130277B2 (en) * 2008-02-20 2012-03-06 Aricent Group Method and system for intelligent and efficient camera motion estimation for video stabilization
CN101483713A (en) * 2009-01-16 2009-07-15 西安电子科技大学 Deinterleaving method based on moving target
CN101945284A (en) * 2010-09-29 2011-01-12 无锡中星微电子有限公司 Motion estimation device and method
CN102098440A (en) * 2010-12-16 2011-06-15 北京交通大学 Electronic image stabilizing method and electronic image stabilizing system aiming at moving object detection under camera shake
CN102523419A (en) * 2011-12-31 2012-06-27 上海大学 Digital video signal conversion method based on motion compensation
WO2014000636A1 (en) * 2012-06-25 2014-01-03 北京大学深圳研究生院 Method for motion vector prediction and visual disparity vector prediction of multiview video coding
CN102917217A (en) * 2012-10-18 2013-02-06 北京航空航天大学 Movable background video object extraction method based on pentagonal search and three-frame background alignment
CN103096083A (en) * 2013-01-23 2013-05-08 北京京东方光电科技有限公司 Method and device of moving image compensation
CN103581647A (en) * 2013-09-29 2014-02-12 北京航空航天大学 Depth map sequence fractal coding method based on motion vectors of color video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
V. COUNTURE ET AL: "Two-frame frequency-based estimation of local motion parallax direction in 3D cluttered scenes", 2007 6th International Conference on 3-D Digital Imaging and Modeling *
马亮 (Ma Liang): "MPEG-4解码器的运动补偿VLSI设计与实现" [Motion Compensation VLSI Design and Implementation for an MPEG-4 Decoder], China Master's Theses Full-text Database *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113470093A (en) * 2021-09-01 2021-10-01 启东市德立神起重运输机械有限公司 Video jelly effect detection method, device and equipment based on aerial image processing
CN115379123A (en) * 2022-10-26 2022-11-22 山东华尚电气有限公司 Transformer fault detection method for inspection by unmanned aerial vehicle
CN115379123B (en) * 2022-10-26 2023-01-31 山东华尚电气有限公司 Transformer fault detection method for unmanned aerial vehicle inspection
CN115876785A (en) * 2023-02-02 2023-03-31 苏州誉阵自动化科技有限公司 Visual identification system for product defect detection

Also Published As

Publication number Publication date
CN112906475B (en) 2022-08-02

Similar Documents

Publication Publication Date Title
CN112906475B (en) Artificial intelligence-based rolling shutter imaging method and system for urban surveying and mapping unmanned aerial vehicle
WO2017096949A1 (en) Method, control device, and system for tracking and photographing target
US10984583B2 (en) Reconstructing views of real world 3D scenes
US9014421B2 (en) Framework for reference-free drift-corrected planar tracking using Lucas-Kanade optical flow
CN111382613B (en) Image processing method, device, equipment and medium
CN106210449B (en) Multi-information fusion frame rate up-conversion motion estimation method and system
CN105678809A (en) Handheld automatic follow shot device and target tracking method thereof
CN112668432A (en) Human body detection tracking method in ground interactive projection system based on YoloV5 and Deepsort
WO2019221013A4 (en) Video stabilization method and apparatus and non-transitory computer-readable medium
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN106412441B (en) A kind of video stabilization control method and terminal
KR20100104591A (en) Method for fabricating a panorama
CN110006444B (en) Anti-interference visual odometer construction method based on optimized Gaussian mixture model
CN112207821B (en) Target searching method of visual robot and robot
CN109712177A (en) Image processing method, device, electronic equipment and computer readable storage medium
US6122319A (en) Motion compensating apparatus using gradient pattern matching and method thereof
CN110781962A (en) Target detection method based on lightweight convolutional neural network
CN111598775B (en) Light field video time domain super-resolution reconstruction method based on LSTM network
CN112529962A (en) Indoor space key positioning technical method based on visual algorithm
CN114399539A (en) Method, apparatus and storage medium for detecting moving object
CN114973399A (en) Human body continuous attitude estimation method based on key point motion estimation
CN112884803B (en) Real-time intelligent monitoring target detection method and device based on DSP
US20230290061A1 (en) Efficient texture mapping of a 3-d mesh
CN116883897A (en) Low-resolution target identification method
CN117014716A (en) Target tracking method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230803

Address after: 450000, Floor 20, Unit 1, Building 7, Lin8, Lvyin Road, High tech Industrial Development Zone, Zhengzhou City, Henan Province

Patentee after: Zhengzhou Gaosun Information Technology Co.,Ltd.

Address before: Room 195, 18 / F, unit 2, building 6, 221 Jinsuo Road, high tech Industrial Development Zone, Zhengzhou City, Henan Province, 450000

Patentee before: Zhengzhou Kaiwen Electronic Technology Co.,Ltd.