CN117237240B - Data intelligent acquisition method and system based on data characteristics - Google Patents

Data intelligent acquisition method and system based on data characteristics Download PDF

Info

Publication number
CN117237240B
CN117237240B CN202311515009.1A CN202311515009A CN117237240B CN 117237240 B CN117237240 B CN 117237240B CN 202311515009 A CN202311515009 A CN 202311515009A CN 117237240 B CN117237240 B CN 117237240B
Authority
CN
China
Prior art keywords
image
driving
pixel point
gray
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311515009.1A
Other languages
Chinese (zh)
Other versions
CN117237240A (en
Inventor
陈海文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Yiwei Software Co ltd
Original Assignee
Hunan Yiwei Software Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Yiwei Software Co ltd filed Critical Hunan Yiwei Software Co ltd
Priority to CN202311515009.1A priority Critical patent/CN117237240B/en
Publication of CN117237240A publication Critical patent/CN117237240A/en
Application granted granted Critical
Publication of CN117237240B publication Critical patent/CN117237240B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of image processing, in particular to a data intelligent acquisition method and system based on data characteristics, comprising the following steps: acquiring a driving gray image of each frame in driving video data, acquiring the speed and the steering angle of a vehicle, acquiring a visual field center point according to the driving gray image, acquiring the distance between any pixel point in the driving gray image and the visual field center point according to the visual field center point, acquiring a correction distance according to the steering angle of the vehicle, acquiring the position variation of the pixel point according to the correction distance and the speed of the vehicle, acquiring the moving direction of the pixel point, acquiring the driving video data after noise reduction according to the position variation of the pixel point and the moving direction, and storing the driving video data. According to the invention, more constraints can be provided by utilizing the information in the multi-frame driving gray level image to inhibit noise and enhance image details, and a cleaner and richer-detail image result is generated by integrating the information of the multi-frame image, so that the driving video data after noise reduction is obtained.

Description

Data intelligent acquisition method and system based on data characteristics
Technical Field
The invention relates to the technical field of image processing, in particular to a data intelligent acquisition method and system based on data characteristics.
Background
In recent years, artificial intelligence has rapidly developed, particularly in the field of automatic driving, and training of artificial intelligence in the field of automatic driving requires a large number of driving video samples, and the quality of the samples affects the training results, so image noise reduction is required. When the driving video is in severe weather, the severe weather conditions comprise rain and snow, haze, wind and sand and the like, and the problems of noise, blurring, contrast reduction and the like in the image can be caused by the weather conditions, so that the method is particularly critical to the driving video image processing under the severe weather conditions.
At present, aiming at noise reduction of driving video images in severe weather, non-local mean filtering is one of common techniques. However, the traditional non-local mean filtering searches the whole image, the calculation amount is large in the searching process, and for stronger noise, a proper similar block is difficult to find, so that the noise reduction effect is poor.
Disclosure of Invention
In order to solve the problems, the invention provides a data intelligent acquisition method and a system based on data characteristics.
The intelligent data acquisition method based on the data characteristics adopts the following technical scheme:
an embodiment of the invention provides a data intelligent acquisition method based on data characteristics, which comprises the following steps:
acquiring a driving gray image of each frame in driving video data; acquiring the speed and steering angle of a vehicle in the driving video data, and acquiring a visual field center point of a driving gray scale image according to the driving gray scale image for the driving gray scale image of any frame;
establishing a rectangular coordinate system according to the driving gray image, and obtaining the distance between any pixel point in the driving gray image and the visual field center point according to the position information of any pixel point in the driving gray image and the visual field center point of the driving gray image in the rectangular coordinate system;
obtaining a correction distance between any pixel point in the driving gray level image and the vision center point according to the distance between any pixel point in the driving gray level image and the vision center point and the steering angle of the vehicle, and obtaining the position variation of any pixel point in the driving gray level image of the next frame according to the correction distance between any pixel point in the driving gray level image and the vision center point and the speed of the vehicle;
obtaining the moving direction of any pixel point in the driving gray image according to the vector formed by the visual field center point and any pixel point of the driving gray image;
and obtaining the noise-reduced driving video data according to the position variation of any pixel point in the driving gray level image of the next frame and the moving direction of any pixel point in the driving gray level image, and storing the noise-reduced driving video data.
Further, the specific method for acquiring the speed and the steering angle of the vehicle in the driving video data comprises the following steps:
the speed and steering angle of the vehicle are obtained through the GPS of the vehicle.
Further, the step of obtaining the center point of the field of view of the driving gray image according to the driving gray image comprises the following specific steps:
edge detection is carried out on a driving gray image by using a Canny operator to obtain a plurality of closed edges in the driving gray image, an angle threshold is preset, for any one closed edge, pixel points with the slope change of continuous pixel points on the closed edge exceeding the preset angle threshold are used as base points, all base points on all the closed edges are used as segmentation points to segment all the closed edges to obtain a plurality of edge sections, a length threshold is preset, for any one edge section, if the number of the pixel points contained in the edge section is greater than the preset length threshold, the edge section is reserved, all reserved edge sections are extended to two sides of the edge section, the extending direction is the tangential direction of the end points of each edge section, a plurality of intersection points are obtained, and all the intersection points are inputIn the clustering, the number parameter of the clustering center is preset>,/>And presetting a first parameter, wherein a central point of the clustering result is used as a visual field central point of the driving gray image.
Further, the establishing a rectangular coordinate system according to the driving gray image comprises the following specific steps:
and taking the lower left corner of the driving gray image as an origin, taking the horizontal right as an x-axis and taking the vertical upward as a y-axis to establish a rectangular coordinate system.
Further, the step of obtaining the distance between any one pixel point in the driving gray image and the center point of the field of view according to the position information of any one pixel point in the driving gray image and the center point of the field of view of the driving gray image in the rectangular coordinate system comprises the following specific steps:
in the method, in the process of the invention,is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the ordinate of the kth pixel point in the gray level image of the driving vehicle,>is the abscissa of the center point of the field of view of the gray image of the driving, +.>Is the ordinate of the center point of the field of view of the gray image of the driving vehicle,/>The Euclidean distance between the kth pixel point and the center point of the visual field in the driving gray level image.
Further, the step of obtaining the corrected distance between any one pixel point in the driving gray level image and the center point of the visual field according to the distance between any one pixel point in the driving gray level image and the center point of the visual field and the steering angle of the vehicle comprises the following specific steps:
in the method, in the process of the invention,is the Euclidean distance between the kth pixel point and the center point of the visual field in the gray level image of the driving vehicle, and is->Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>As an exponential function based on natural constants, < +.>For the steering angle of the vehicle +.>The correction distance between the kth pixel point and the center point of the visual field in the driving gray level image is obtained.
Further, the step of obtaining the position variation of the driving gray image of the next frame of any pixel point in the driving gray image according to the corrected distance between any pixel point in the driving gray image and the center point of the field of view and the speed of the vehicle comprises the following specific steps:
in the method, in the process of the invention,for the speed of the vehicle>Is a preset constant->For the kth pixel point in the gray image of the driving and the field of viewCorrection distance of heart point->As an exponential function based on natural constants, < +.>Representing a linear normalization function, ++>The position variation of the kth pixel point in the driving gray image of the next frame is obtained.
Further, the method for obtaining the moving direction of any one pixel point in the driving gray image according to the vector formed by the visual field center point and any one pixel point of the driving gray image comprises the following specific steps:
in the method, in the process of the invention,a unit vector comprising a center point of a field of view of the gray-scale image of the vehicle as a start point and a kth pixel point of the gray-scale image of the vehicle as an end point +.>The unit vector is composed of the center point of the field of view of the driving gray image as the starting point and the center point of the driving gray image as the end point>For the steering angle of the vehicle +.>Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>To take absolute value, +.>Representing a linear normalization function, ++>The moving direction of the kth pixel point in the driving gray level image is shown.
Further, the step of obtaining the noise-reduced driving video data according to the position variation of any pixel point in the driving gray image of the next frame and the moving direction of any pixel point in the driving gray image, and storing the noise-reduced driving video data comprises the following specific steps:
the method comprises the steps of marking any frame of driving gray level image as a first image, marking the next frame of driving gray level image of the first image as a second image, marking the next frame of driving gray level image of the second image as a third image, marking the kth pixel point in the first image as a third imageAccording to +.>In the position change of the second image and in the first image +.>Is moved in the first image to obtain +.>In the pixel point corresponding to the second image, marked as +.>Will->In the pixel point corresponding to the third image, it is marked +.>
Presetting the window size of non-local mean filtering as followsN is a preset value, in order to->The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>R is a preset first value, will ∈>、/>Is->The range formed by sequential splicing is marked as S, wherein S is a rectangular area, and non-local average filtering is carried out according to a preset window of non-local average filtering and S, so that a gray value of a k pixel point in the first image after filtering is obtained;
and for any frame of driving gray image, obtaining a filtered driving gray image according to the gray value of each pixel point in the driving gray image, recording a video formed by the filtered driving gray image of each frame in the driving video data as driving video data after noise reduction, and storing the driving video data after noise reduction.
The invention also provides a data intelligent acquisition system based on the data characteristics, which comprises a memory and a processor, wherein the processor executes a computer program stored in the memory to realize the steps of the method.
The technical scheme of the invention has the beneficial effects that: according to the invention, the similar areas are searched for at the corresponding positions of each point in the driving gray level image in the multi-frame image, so that searching in the whole image is avoided, and the searching efficiency is improved;
according to the position variation of any pixel point in the driving gray image of the next frame and the moving direction of any pixel point in the driving gray image, the driving video data after noise reduction is obtained, and similar areas are searched by utilizing non-local mean filtering at the corresponding positions of each point in the driving gray image in the multi-frame image, so that searching in the whole image is avoided, and the searching efficiency is improved;
according to the invention, more constraints can be provided by utilizing the information in the multi-frame driving gray level image to inhibit noise and enhance image details, and a stronger denoising effect can be realized by integrating the information of the multi-frame image, so that a cleaner and richer image result is generated, further driving video data after noise reduction is obtained, and subsequent better training is facilitated;
according to the method and the device for obtaining the position of the multi-frame image, the position corresponding to the pixel point in the multi-frame image is obtained through the running information of the vehicle, and the calculated corresponding position is more reliable by considering the speed and the steering information of the vehicle during calculation.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart illustrating steps of a data feature-based intelligent data collection method according to an embodiment of the present invention.
Detailed Description
In order to further describe the technical means and effects adopted by the invention to achieve the preset aim, the following detailed description is given below of the data feature-based intelligent data acquisition method and system according to the invention, and the detailed implementation, structure, features and effects thereof, with reference to the accompanying drawings and preferred embodiments. In the following description, different "one embodiment" or "another embodiment" means that the embodiments are not necessarily the same. Furthermore, the particular features, structures, or characteristics of one or more embodiments may be combined in any suitable manner.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following specifically describes a specific scheme of the data intelligent acquisition method based on the data characteristics provided by the invention with reference to the accompanying drawings.
Referring to fig. 1, a flowchart illustrating steps of a data feature-based intelligent data collection method according to an embodiment of the present invention is shown, where the method includes the following steps:
and S001, acquiring a driving gray image of each frame in the driving video data.
It should be noted that, in this embodiment, the corresponding point positions of the images in the multi-frame images are calculated according to the running information of the vehicle, so that the non-local mean filtering is performed on the multi-frame images to achieve a better noise reduction effect on the images.
It should be further noted that, in the scene of the present embodiment, in the driving video under severe weather, problems such as noise, blurring, and contrast reduction occur in the image due to weather conditions, so that the image needs to be subjected to noise reduction processing, and before specific processing is performed, driving video data needs to be acquired first.
Specifically, the vehicle-mounted camera is used for shooting the driving video data, and because the driving video data is composed of multiple frames of driving images, in order to obtain the optimal noise reduction effect, in this embodiment, the driving image of each frame is analyzed, and the driving image of each frame is subjected to gray processing to obtain the driving gray image of each frame in the driving video data.
The driving video data mainly includes road information, the driving image includes an irrelevant background area such as a lane line and sky, and the lane line is below the driving image.
And obtaining the driving gray level image of each frame in the driving video data.
Step S002, acquiring the speed and steering angle of the vehicle in the driving video data, and obtaining the center point of the field of view of the driving gray scale image according to the driving gray scale image for the driving gray scale image of any frame.
It should be noted that, the principle of non-average filtering is to find the accumulated regions in the image, and perform the noise reduction processing by using the average value of these similar regions, and because the similar regions are found in the whole image, there is a possibility that suitable similar regions cannot be found, resulting in poor noise reduction effect. The optimal effect should be to compare the areas of the same location where the noise is different, so that the best effect is obtained.
Because the vehicle itself moves, objects such as raindrops and snowflakes also have movement of the vehicle itself in severe weather, and therefore, for image noise in driving video in severe weather, the position of the image noise in adjacent frame images also changes. This means that it is possible to denoise a single frame image by a plurality of frame images, so that each region can be denoised by a similar region in a different frame image.
Specifically, the speed and the steering angle of the vehicle are obtained, specifically as follows:
the speed and steering angle of the vehicle are obtained through the GPS of the vehicle.
The position change of the pixel points in the driving image in the multi-frame driving image is related to the speed of the vehicle driving, and the faster the vehicle speed is, the larger the distance between each pixel point in the driving image in the adjacent frame is; and the steering angle of the vehicle is related, when the vehicle steers, the distance change of pixel points at different positions in the driving image is different, and the larger the steering angle is, the larger the difference is; and the closer the pixel is to the vehicle, i.e. the lower it is in the driving image, the greater its distance variation, as a function of the position of the pixel in the driving image.
Because the driving image of the driving video has a visual effect of being close to or far from, road information such as lane lines in the image gradually converges towards the center point of the visual field of the image, and the road information of the image gradually converges at the center point of the visual field of the image. The distance between the pixel points in the multi-frame image is related to the relative position of the center of the visual field, and the road information such as the lane line is not included above the center point of the visual field, so that the noise reduction processing is not performed on the road information. Therefore, firstly, the center point of the visual field of the gray level image of the driving is found.
Specifically, for a driving gray image of any frame, a field center point of the driving gray image is obtained according to the driving gray image, and the specific steps are as follows:
edge detection is performed on a running gray image by using a Canny operator to obtain a plurality of closed edges in the running gray image, a preset angle threshold is described by taking the preset angle threshold as 45 DEG, for any one closed edge, pixel points with the slope change of continuous pixel points on the closed edge exceeding the preset angle threshold are taken as base points, all base points on all closed edges are taken as segmentation points, all closed edges are segmented to obtain a plurality of edge segments, a preset length threshold is described by taking the preset length threshold as 50, for any one edge segment, if the number of pixel points contained in the edge segment is greater than the preset length threshold, the edge segment is reserved, all reserved edge segments extend to two sides of the edge segment, the extending direction is the tangential direction of the end point of each edge segment, the tangential direction of the end point is obtained as a tangential direction of the end point, the embodiment is not repeated, a plurality of intersection points are obtained, and all intersection points are input into a known technologyIn the clustering, the number parameter of the clustering center is preset>,/>For presetting a first parameter, taking the central point of the clustering result as the central point of the visual field of the driving gray image, +.>
Thus, the visual field center point of the driving gray level image is obtained.
And S003, establishing a rectangular coordinate system according to the driving gray level image, and obtaining the distance between any pixel point in the driving gray level image and the center point of the field of view according to the position information of any pixel point in the driving gray level image and the center point of the field of view of the driving gray level image in the rectangular coordinate system.
It is noted that, from the above analysis, the position change of the pixel point in the driving image in the multi-frame driving image is related to the speed of the vehicle driving, and the faster the vehicle speed is, the larger the distance between each pixel point in the driving image of the driving video in the driving image of the adjacent frame is. And also on the distance of the pixel point from the center point of the visual field, if the pixel point is positioned below the center point of the visual field, the pixel point possibly contains road information of the image, and the larger the distance from the center point of the visual field is, which means that the closer the pixel point is to a running vehicle, the larger the position change of the pixel point in the multi-frame image is.
Specifically, a rectangular coordinate system is established according to the driving gray level image, and the distance between any one pixel point in the driving gray level image and the center point of the field of view of the driving gray level image is obtained according to the position information of any one pixel point in the driving gray level image and the center point of the field of view of the driving gray level image in the rectangular coordinate system, specifically as follows:
and the lower left corner of the driving gray image is taken as an origin, the horizontal right is taken as an x axis, the vertical upward is taken as a y axis, and a rectangular coordinate system is established, so that the coordinates of pixel points in the driving gray image can be conveniently acquired subsequently.
In the method, in the process of the invention,is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the ordinate of the kth pixel point in the gray level image of the driving vehicle,>is the abscissa of the center point of the field of view of the gray image of the driving, +.>Is the ordinate of the center point of the field of view of the gray image of the driving vehicle,/>The Euclidean distance between the kth pixel point and the center point of the visual field in the driving gray level image.
It should be noted that the number of the substrates,in this case, since the k-th pixel point in the driving gray-scale image is located above the center point of the visual field and the area above the visual field point in the driving gray-scale image does not include road information, the noise reduction process is not performed in the present embodimentIn the driving gray level image, the Euclidean distance between the kth pixel point and the center point of the visual field is set to be 0, < >>In this case, the region below the center point of the visual field, that is, below the center point of the visual field, in the driving gray-scale image contains road information, and the larger the Euclidean distance between the k pixel point and the center point of the visual field in the driving gray-scale image, the description of the (th) pixel point in the driving gray-scale image>The closer a pixel is to the vehicle, the greater its position change in the multi-frame image at this time.
Thus, the distance between any pixel point in the driving gray image and the center point of the visual field is obtained.
Step S004, according to the distance between any pixel point in the driving gray level image and the center point of the visual field and the steering angle of the vehicle, the correction distance between any pixel point in the driving gray level image and the center point of the visual field is obtained, and according to the correction distance between any pixel point in the driving gray level image and the center point of the visual field and the speed of the vehicle, the position change quantity of the driving gray level image of any pixel point in the driving gray level image in the next frame is obtained.
When the vehicle turns, the distance change of the pixel points at different positions in the driving image is different, and the larger the turning angle is, the larger the difference is. The closer to the side of the turn, the smaller the change in distance in the image, and the farther from the side of the turn, the larger the change in distance in the image. The distance is the Euclidean distance between any pixel point in the driving gray level image and the center point of the visual field, and the Euclidean distance is accurate when the vehicle runs straight, but the error can occur when the Euclidean distance is used at the moment due to the change of the center point of the visual field and the fact that the actual movement distance is an arc line when the vehicle turns, and the larger the turning angle is, the larger the error is. Therefore, it is necessary to correct the euclidean distance, which is the distance between the pixel point and the center point of the visual field, calculated above, based on the steering information of the vehicle.
When the vehicle turns, the center point of the visual field is close to the turning side, that is, when the steering angle is the largestIn this case, the actual center point of the visual field is almost near the vehicle, but the actual calculated center point of the visual field is deviated, so that the actual distance to the pixel on the turning side is smaller than the calculated Euclidean distance, and the larger the steering angle is, the larger the error between the actual distance and the calculated Euclidean distance is, and therefore the more the distance to the pixel on the turning side should be correctedFor the side far away from the turning, since the actual distance is an arc, when the steering angle is larger, the radian of the arc is larger, and at the moment, the Euclidean distance of the pixel point on the side is larger to be corrected.
Specifically, according to the distance between any one pixel point in the driving gray level image and the center point of the visual field and the steering angle of the vehicle, the correction distance between any one pixel point in the driving gray level image and the center point of the visual field is obtained, specifically as follows:
for a driving gray image of any one frame, in the formula,is the Euclidean distance between the kth pixel point and the center point of the visual field in the gray level image of the driving vehicle, and is->Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>As an exponential function based on natural constants, < +.>For the steering angle of the vehicle, the greater its value is +.>The greater the error of (a) is, the greater the degree of correction is, and in the present embodiment, the left steering of the vehicle is setPositive value, vehicle steering right +.>Negative value, & lt>The correction distance between the kth pixel point and the center point of the visual field in the driving gray image is set; since the correction distance is calculated only when turning, no correction is required when going straight, but +.>When turning leftAnd right turn +.>Thus->The input of the model is necessarily a negative number, and the larger the absolute value of the angle is, the larger the absolute value of the horizontal coordinate difference value is, the smaller the model output value is, and the inverse proportion relation and normalization processing are realized. Here, the running gray image of any one frame is also analyzed.
It is to be noted that, as the vehicle speed increases, the distance between the pixel point and the center point of the visual field increases, and the positional change of the pixel point in the multi-frame image increases, so that the positional change amount of each pixel point is calculated.
Specifically, according to the correction distance between any pixel point in the driving gray level image and the center point of the visual field and the speed of the vehicle, the position variation of the driving gray level image of the next frame of any pixel point in the driving gray level image is obtained, specifically as follows:
in the method, in the process of the invention,for the speed of the vehicle>As a preset constant reflecting a conversion constant of the vehicle speed information into the position conversion in the image, the present embodiment describes about 100 as the preset constant, +.>For the correction distance between the kth pixel point and the center point of the visual field in the gray level image of the driving vehicle, +.>As an exponential function based on natural constants, < +.>Representing a linear normalization function, the normalization object being +.>,/>The position variation of the kth pixel point in the driving gray image of the next frame is obtained.
It should be noted that the number of the substrates,the larger the value of (2), the closer the kth pixel point in the driving gray image is to the vehicle, the larger the position change of the kth pixel point in the multi-frame is due to +.>Is nonlinear with respect to the position change in the image, and +.>The larger the position change of the pixel point in the image is, the larger the +.>At small, the position change of the pixel point in the image is essentially negligible, so +.>To express +.>Relationship to the change in position of the pixel point in the image.
So far, the position variation of any pixel point in the driving gray image of the next frame is obtained.
Step S005, according to the vector formed by the visual field center point and any one pixel point of the driving gray image, the moving direction of any one pixel point in the driving gray image is obtained.
When the vehicle is traveling in a normal straight line, the position of the pixel point in the traveling gray image of the following frame is the direction of the line between the center point of the field of view and the pixel point, that is, the unit vector direction. However, when the vehicle is steering, the vector will have an error, and the larger the absolute value of the steering angle is, the larger the error of the unit vector is. The further away from the steering direction, the larger the error of the unit vector. Therefore, it is necessary to correct the direction of the unit vector.
Specifically, according to a vector formed by a visual field center point and any one pixel point of the driving gray image, the moving direction of any one pixel point in the driving gray image is obtained, specifically as follows:
in the method, in the process of the invention,a unit vector comprising a center point of a field of view of the gray-scale image of the vehicle as a start point and a kth pixel point of the gray-scale image of the vehicle as an end point +.>The unit vector is composed of the center point of the field of view of the driving gray image as the starting point and the center point of the driving gray image as the end point>For the steering angle of the vehicle +.>Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>To take absolute value, +.>Representing a linear normalization function, the normalization object being +.>,/>The moving direction of the kth pixel point in the driving gray image is the corrected direction.
It should be noted that the number of the substrates,the larger the correction degree of the moving direction of the pixel point in the driving gray image in the multi-frame driving gray image is, the more the correction degree is, which shows that the pixel point is far from the center point of the visual field +.>The farther it is, the greater the degree of correction of the moving direction in the multi-frame image.
Thus, the moving direction of any pixel point in the driving gray image is obtained.
Step S006, obtaining the noise-reduced driving video data according to the position variation of any pixel point in the driving gray level image of the next frame and the moving direction of any pixel point in the driving gray level image, and storing the noise-reduced driving video data.
Specifically, a driving gray level image of any frame is recorded as a first image, a driving gray level image of the next frame of the first image is recorded as a second image, and the second image is recordedThe gray level image of the next frame of driving of the image is marked as a third image, and the kth pixel point in the first image is marked asAccording to +.>In the position change of the second image and in the first image +.>Is moved in the first image to obtain +.>In the pixel point corresponding to the second image, marked as +.>Will->In the pixel point corresponding to the third image, it is marked +.>
Further, the window size of the non-local mean filtering is preset to beN is a predetermined value, in this embodiment, n=3 is taken as an example, and +.>The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>R is a preset first value, in this embodiment, R=50 is taken as an example, and +.>、/>Is->The range formed by sequential stitching is recorded as S, wherein S is a rectangular area, non-local mean filtering is carried out according to a preset window of non-local mean filtering and S, the gray value of the k pixel point in the first image after filtering is obtained, for any frame of driving gray image, the driving gray image after filtering is obtained according to the gray value of each pixel point in the driving gray image, the video formed by the driving gray image after filtering of each frame in the driving video data is recorded as driving video data after noise reduction, and the driving video data after noise reduction is stored for subsequent training.
It should be noted that, according to the preset window of the non-local mean filtering and S, the non-local mean filtering is performed to obtain the gray value of the kth pixel point in the first image after filtering, which is not described in detail in this embodiment, when determining the range, if the kth pixel point in any frame of driving gray image is at the edge of the image, the range formed by the radius of the central neighborhood is obtained and exceeds the boundary of the driving gray image, and at this time, the embodiment uses the quadratic linear interpolation method to interpolate and fill the data in the part exceeding the boundary of the driving gray image.
Through the steps, the intelligent data acquisition method based on the data characteristics is completed.
Another embodiment of the present invention provides a data feature-based data intelligent acquisition system, the system including a memory and a processor that, when executing a computer program stored in the memory, performs the following operations:
acquiring a driving gray image of each frame in driving video data; acquiring the speed and steering angle of a vehicle in the driving video data, and acquiring a visual field center point of a driving gray scale image according to the driving gray scale image for the driving gray scale image of any frame; establishing a rectangular coordinate system according to the driving gray image, and obtaining the distance between any pixel point in the driving gray image and the visual field center point according to the position information of any pixel point in the driving gray image and the visual field center point of the driving gray image in the rectangular coordinate system; obtaining a correction distance between any pixel point in the driving gray level image and the vision center point according to the distance between any pixel point in the driving gray level image and the vision center point and the steering angle of the vehicle, and obtaining the position variation of any pixel point in the driving gray level image of the next frame according to the correction distance between any pixel point in the driving gray level image and the vision center point and the speed of the vehicle; obtaining the moving direction of any pixel point in the driving gray image according to the vector formed by the visual field center point and any pixel point of the driving gray image; and obtaining the noise-reduced driving video data according to the position variation of any pixel point in the driving gray level image of the next frame and the moving direction of any pixel point in the driving gray level image, and storing the noise-reduced driving video data.
The above description is only of the preferred embodiments of the present invention and is not intended to limit the invention, but any modifications, equivalent substitutions, improvements, etc. within the principles of the present invention should be included in the scope of the present invention.

Claims (8)

1. The intelligent data acquisition method based on the data characteristics is characterized by comprising the following steps of:
acquiring a driving gray image of each frame in driving video data; acquiring the speed and steering angle of a vehicle in the driving video data, and acquiring a visual field center point of a driving gray scale image according to the driving gray scale image for the driving gray scale image of any frame;
establishing a rectangular coordinate system according to the driving gray image, and obtaining the distance between any pixel point in the driving gray image and the visual field center point according to the position information of any pixel point in the driving gray image and the visual field center point of the driving gray image in the rectangular coordinate system;
obtaining a correction distance between any pixel point in the driving gray level image and the vision center point according to the distance between any pixel point in the driving gray level image and the vision center point and the steering angle of the vehicle, and obtaining the position variation of any pixel point in the driving gray level image of the next frame according to the correction distance between any pixel point in the driving gray level image and the vision center point and the speed of the vehicle;
obtaining the moving direction of any pixel point in the driving gray image according to the vector formed by the visual field center point and any pixel point of the driving gray image;
obtaining noise-reduced driving video data according to the position variation of any pixel point in the driving gray image of the next frame and the moving direction of any pixel point in the driving gray image, and storing the noise-reduced driving video data;
the specific method for acquiring the speed and the steering angle of the vehicle in the driving video data comprises the following steps:
acquiring the speed and the steering angle of the vehicle through the GPS of the vehicle;
the method comprises the specific steps of obtaining the noise-reduced driving video data according to the position variation of any pixel point in the driving gray level image of the next frame and the moving direction of any pixel point in the driving gray level image, and storing the noise-reduced driving video data, and comprises the following specific steps:
the gray level image of any frame of driving is marked as a first image, the gray level image of the next frame of driving of the first image is marked as a second image, the gray level image of the next frame of driving of the second image is marked as a third image, and the following steps are carried outThe kth pixel point in the first image is marked asAccording to +.>In the position change of the second image and in the first image +.>Is moved in the first image to obtain +.>In the pixel point corresponding to the second image, marked as +.>Will->In the pixel point corresponding to the third image, it is marked +.>
Presetting the window size of non-local mean filtering as followsN is a preset value, in order to->The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>To->The range formed for a center neighborhood radius R is denoted +.>R is a preset first value, will ∈>、/>Is->The range formed by sequential splicing is marked as S, wherein S is a rectangular area, and non-local average filtering is carried out according to a preset window of non-local average filtering and S, so that a gray value of a k pixel point in the first image after filtering is obtained;
and for any frame of driving gray image, obtaining a filtered driving gray image according to the gray value of each pixel point in the driving gray image, recording a video formed by the filtered driving gray image of each frame in the driving video data as driving video data after noise reduction, and storing the driving video data after noise reduction.
2. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the step of obtaining the center point of the field of view of the driving gray image according to the driving gray image comprises the following specific steps:
edge detection is carried out on the driving gray image by using a Canny operator to obtain a plurality of closed edges in the driving gray image, an angle threshold is preset, for any one closed edge, pixel points, of which the slope changes of continuous pixel points on the closed edge exceed the preset angle threshold, are used as base points, all base points on all the closed edges are used as segmentation points to segment all the closed edges, a plurality of edge segments are obtained, and the preset length is ensuredFor any edge segment, if the number of pixel points contained in the edge segment is greater than a preset length threshold, reserving the edge segment, extending all reserved edge segments to two sides of the edge segment, wherein the extending direction is the tangential direction of the end point of each edge segment, obtaining a plurality of intersection points, and inputting all the intersection pointsIn the clustering, the number parameter of the clustering center is preset>,/>And presetting a first parameter, wherein a central point of the clustering result is used as a visual field central point of the driving gray image.
3. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the establishing a rectangular coordinate system according to the driving gray level image comprises the following specific steps:
and taking the lower left corner of the driving gray image as an origin, taking the horizontal right as an x-axis and taking the vertical upward as a y-axis to establish a rectangular coordinate system.
4. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the step of obtaining the distance between any one pixel point in the driving gray image and the center point of the field of view according to the position information of any one pixel point in the driving gray image and the center point of the field of view of the driving gray image in the rectangular coordinate system comprises the following specific steps:
in the method, in the process of the invention,for the kth image in the gray-scale image of the drivingThe abscissa of the prime point,/->Is the ordinate of the kth pixel point in the gray level image of the driving vehicle,>is the abscissa of the center point of the field of view of the gray image of the driving, +.>Is the ordinate of the center point of the field of view of the gray image of the driving vehicle,/>The Euclidean distance between the kth pixel point and the center point of the visual field in the driving gray level image.
5. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the step of obtaining the corrected distance between any one pixel point in the driving gray level image and the center point of the field of view according to the distance between any one pixel point in the driving gray level image and the center point of the field of view and the steering angle of the vehicle comprises the following specific steps:
in the method, in the process of the invention,is the Euclidean distance between the kth pixel point and the center point of the visual field in the gray level image of the driving vehicle, and is->Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>As an exponential function based on natural constants, < +.>For the steering angle of the vehicle +.>The correction distance between the kth pixel point and the center point of the visual field in the driving gray level image is obtained.
6. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the step of obtaining the position variation of the driving gray image of any one pixel point in the driving gray image in the next frame according to the corrected distance between any one pixel point in the driving gray image and the center point of the field of view and the speed of the vehicle comprises the following specific steps:
in the method, in the process of the invention,for the speed of the vehicle>Is a preset constant->For the correction distance between the kth pixel point and the center point of the visual field in the gray level image of the driving vehicle, +.>As an exponential function based on natural constants, < +.>Representing a linear normalization function, ++>The position variation of the kth pixel point in the driving gray image of the next frame is obtained.
7. The intelligent data acquisition method based on the data characteristics according to claim 1, wherein the step of obtaining the moving direction of any one pixel point in the driving gray image according to the vector formed by the center point of the field of view of the driving gray image and any one pixel point comprises the following specific steps:
in the method, in the process of the invention,a unit vector comprising a center point of a field of view of the gray-scale image of the vehicle as a start point and a kth pixel point of the gray-scale image of the vehicle as an end point +.>The unit vector is composed of the center point of the field of view of the driving gray image as the starting point and the center point of the driving gray image as the end point>For the steering angle of the vehicle +.>Is the abscissa of the kth pixel point in the gray level image of the driving vehicle, +.>Is the abscissa of the center point of the field of view of the gray image of the driving, +.>To take absolute value, +.>Representing a linear normalization function, ++>The moving direction of the kth pixel point in the driving gray level image is shown.
8. A data feature-based data intelligent acquisition system comprising a memory and a processor, wherein the processor executes a computer program stored in the memory to implement the steps of the data feature-based data intelligent acquisition method as claimed in any one of claims 1 to 7.
CN202311515009.1A 2023-11-15 2023-11-15 Data intelligent acquisition method and system based on data characteristics Active CN117237240B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311515009.1A CN117237240B (en) 2023-11-15 2023-11-15 Data intelligent acquisition method and system based on data characteristics

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311515009.1A CN117237240B (en) 2023-11-15 2023-11-15 Data intelligent acquisition method and system based on data characteristics

Publications (2)

Publication Number Publication Date
CN117237240A CN117237240A (en) 2023-12-15
CN117237240B true CN117237240B (en) 2024-02-02

Family

ID=89095301

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311515009.1A Active CN117237240B (en) 2023-11-15 2023-11-15 Data intelligent acquisition method and system based on data characteristics

Country Status (1)

Country Link
CN (1) CN117237240B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682455A (en) * 2012-05-10 2012-09-19 天津工业大学 Front vehicle detection method based on monocular vision
KR20140080105A (en) * 2012-12-20 2014-06-30 울산대학교 산학협력단 Method for detecting lane boundary by visual information
CN107481526A (en) * 2017-09-07 2017-12-15 公安部第三研究所 System and method for drive a vehicle lane change detection record and lane change violating the regulations report control
CN111813114A (en) * 2020-07-07 2020-10-23 镇江市高等专科学校 Intelligent car visual navigation method
WO2023279966A1 (en) * 2021-07-08 2023-01-12 中移(上海)信息通信科技有限公司 Multi-lane-line detection method and apparatus, and detection device
CN116188328A (en) * 2023-04-24 2023-05-30 深圳市银河通信科技有限公司 Parking area response lamp linked system based on thing networking
WO2023174283A1 (en) * 2022-03-16 2023-09-21 华为技术有限公司 Anti-carsickness method, device, and system based on visual compensation image
CN116805409A (en) * 2023-06-25 2023-09-26 南京理工大学 Method for identifying road surface state and evaluating flatness by using driving video

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104182957B (en) * 2013-05-21 2017-06-20 北大方正集团有限公司 Traffic video information detecting method and device
CN103778786B (en) * 2013-12-17 2016-04-27 东莞中国科学院云计算产业技术创新与育成中心 A kind of break in traffic rules and regulations detection method based on remarkable vehicle part model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102682455A (en) * 2012-05-10 2012-09-19 天津工业大学 Front vehicle detection method based on monocular vision
KR20140080105A (en) * 2012-12-20 2014-06-30 울산대학교 산학협력단 Method for detecting lane boundary by visual information
CN107481526A (en) * 2017-09-07 2017-12-15 公安部第三研究所 System and method for drive a vehicle lane change detection record and lane change violating the regulations report control
CN111813114A (en) * 2020-07-07 2020-10-23 镇江市高等专科学校 Intelligent car visual navigation method
WO2023279966A1 (en) * 2021-07-08 2023-01-12 中移(上海)信息通信科技有限公司 Multi-lane-line detection method and apparatus, and detection device
WO2023174283A1 (en) * 2022-03-16 2023-09-21 华为技术有限公司 Anti-carsickness method, device, and system based on visual compensation image
CN116188328A (en) * 2023-04-24 2023-05-30 深圳市银河通信科技有限公司 Parking area response lamp linked system based on thing networking
CN116805409A (en) * 2023-06-25 2023-09-26 南京理工大学 Method for identifying road surface state and evaluating flatness by using driving video

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
High-Resolution Vehicle Trajectory Extraction and Denoising From Aerial Videos;Xinqiang Chen et al.;《IEEE Transactions on Intelligent Transportation Systems》;第22卷;全文 *
图像增强算法在行车图像处理中的研究与应用;冯梦如;《中国优秀硕士学位论文全文数据库 工程科技Ⅱ辑》;全文 *

Also Published As

Publication number Publication date
CN117237240A (en) 2023-12-15

Similar Documents

Publication Publication Date Title
CN110349207B (en) Visual positioning method in complex environment
CN107895375B (en) Complex road route extraction method based on visual multi-features
CN107169972B (en) Non-cooperative target rapid contour tracking method
CN109785291A (en) A kind of lane line self-adapting detecting method
JP4157620B2 (en) Moving object detection apparatus and method
CN109146832B (en) Video image splicing method and device, terminal equipment and storage medium
CN109376641B (en) Moving vehicle detection method based on unmanned aerial vehicle aerial video
CN109544635B (en) Camera automatic calibration method based on enumeration heuristic
CN110110608B (en) Forklift speed monitoring method and system based on vision under panoramic monitoring
CN114396945A (en) Unmanned ship edge cleaning path planning method, device, equipment and storage medium
CN111753693B (en) Target detection method under static scene
CN116912715A (en) Unmanned aerial vehicle vision servo control method and system for fan blade inspection
CN114241438B (en) Traffic signal lamp rapid and accurate identification method based on priori information
JP3775200B2 (en) Inter-vehicle distance estimation device
CN117237240B (en) Data intelligent acquisition method and system based on data characteristics
CN114943823B (en) Unmanned aerial vehicle image splicing method and system based on deep learning semantic perception
CN112069924A (en) Lane line detection method, lane line detection device and computer-readable storage medium
CN111428538B (en) Lane line extraction method, device and equipment
Bhupathi et al. An augmented sliding window technique to improve detection of curved lanes in autonomous vehicles
CN105741317A (en) Infrared moving target detection method based on time-space domain saliency analysis and sparse representation
CN115100615A (en) End-to-end lane line detection method based on deep learning
Rui Lane line detection technology based on machine vision
CN114926332A (en) Unmanned aerial vehicle panoramic image splicing method based on unmanned aerial vehicle mother vehicle
CN111428537B (en) Method, device and equipment for extracting edges of road diversion belt
CN111626180A (en) Lane line detection method and device based on polarization imaging

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant