CN112287951B - Data output method, device, medium and computing equipment based on image analysis - Google Patents


Info

Publication number
CN112287951B
CN112287951B (application number CN202011422905.XA)
Authority
CN
China
Prior art keywords
data
fitting
output
feature point
matrix
Prior art date
Legal status
Active
Application number
CN202011422905.XA
Other languages
Chinese (zh)
Other versions
CN112287951A (en)
Inventor
曾凡
易锐
李静
邰海军
柯钦瑜
黄勇
Current Assignee
Xuanwei Beijing Biotechnology Co ltd
First Affiliated Hospital of Zhengzhou University
Original Assignee
Xuanwei Beijing Biotechnology Co ltd
First Affiliated Hospital of Zhengzhou University
Priority date
Filing date
Publication date
Application filed by Xuanwei Beijing Biotechnology Co ltd, First Affiliated Hospital of Zhengzhou University filed Critical Xuanwei Beijing Biotechnology Co ltd
Priority to CN202011422905.XA priority Critical patent/CN112287951B/en
Publication of CN112287951A publication Critical patent/CN112287951A/en
Application granted granted Critical
Publication of CN112287951B publication Critical patent/CN112287951B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion

Abstract

The embodiment of the invention provides a data output method, a data output device, a data output medium and a computing device based on image analysis. The data output method based on image analysis comprises: acquiring a feature point coordinate array in an image to be processed; analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output comprises data of at least one data type; and outputting the data to be output through an output mode corresponding to its data type. The technology of the invention can output data to be output of different data types in different modes and clearly distinguish between the different data types, so that personnel who need to acquire different data can obtain the content contained in the image more clearly and intuitively, which further improves the efficiency of image analysis and processing.

Description

Data output method, device, medium and computing equipment based on image analysis
Technical Field
The embodiment of the invention relates to the technical field of image processing, in particular to a data output method, a data output device, a data output medium and a computing device based on image analysis.
Background
This section is intended to provide a background or context to the embodiments of the invention that are recited in the claims. The description herein is not admitted to be prior art by inclusion in this section.
With the rapid development of image processing technology, more required information can be obtained from the image through the analysis processing of the image. Conventionally, a method of analyzing an image generally determines an object of image analysis processing, analyzes the image according to the object, and finally outputs an image analysis processing result corresponding to the object.
However, in practice, it is found that the prior art generally only outputs the finally obtained image analysis processing result, and as a result, the obtained processing result is generally one-sided, and a person using the processing result cannot know more contents contained in the image, so that the efficiency of the image analysis processing is low.
Disclosure of Invention
In this context, embodiments of the present invention are intended to provide a data output method, apparatus, medium, and computing device based on image analysis.
In a first aspect of embodiments of the present invention, there is provided a data output method based on image analysis, including:
acquiring a feature point coordinate array in an image to be processed;
analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type;
and outputting the data to be output through an output mode corresponding to the data type of the data to be output.
In an embodiment of the present invention, acquiring a feature point coordinate array in an image to be processed includes:
preprocessing an image to be processed to obtain a result image corresponding to the image to be processed;
and processing the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image.
In an embodiment of the present invention, the analyzing the feature point coordinate array according to a preset manner to obtain data to be output corresponding to the feature point coordinate array includes:
segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
acquiring a feature point coordinate sub-array corresponding to each sub-matrix from the feature point coordinate array;
calculating a plurality of feature point coordinate subarrays to obtain feature point center points corresponding to the sub-matrixes, and inserting the feature point center points into the sub-matrix feature point array queues corresponding to the feature point center points;
fitting each sub-matrix characteristic point array queue to obtain fitting data of each sub-matrix characteristic point array queue;
and determining the characteristic point coordinate array, a plurality of characteristic point coordinate sub-arrays, a plurality of characteristic point central points and a plurality of fitting data as data to be output corresponding to the characteristic point coordinate array, wherein the characteristic point coordinate array is data to be output of a characteristic point array type, the characteristic point coordinate sub-array is data to be output of a characteristic point sub-array type, the characteristic point central point is data to be output of a central point type, and the fitting data is data to be output of a fitting type.
In an embodiment of the present invention, the segmenting the result image according to a preset manner to obtain a plurality of sub-matrices corresponding to the result image includes:
converting the result image into a tensor matrix, and acquiring the matrix width and the matrix height of the tensor matrix;
and dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
In an embodiment of the present invention, the fitting the sub-matrix feature point array queues respectively to obtain fitting data of the sub-matrix feature point array queues includes:
fitting each sub-matrix characteristic point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix characteristic point array queue;
inserting each fitting equation into a fitting queue corresponding to the fitting equation, wherein the fitting queues are in one-to-one correspondence with the sub-matrix characteristic point array queues;
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, acquiring a fitting vector from the fitting queue, and calculating according to the characteristic point mean vector and the fitting vector to obtain a mean vector L2 norm and a fitting vector L2 norm;
determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
In an embodiment of the present invention, acquiring a feature point mean vector from the sub-matrix feature point array queue, acquiring a fitting vector from the fitting queue, and calculating a mean vector L2 norm and a fitting vector L2 norm according to the feature point mean vector and the fitting vector includes:
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, and storing the characteristic point mean vector to a first temporary vector;
acquiring a fitting vector from the fitting queue, and storing the fitting vector to a second temporary vector;
and calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
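The norm step above can be sketched as follows. This is a minimal illustrative sketch, not the patented implementation: the function name and the use of numpy's default Euclidean norm as the "preset norm calculation formula" are assumptions.

```python
import numpy as np

def l2_norms(mean_vector, fitting_vector):
    """Copy the two vectors into temporary vectors, then take each L2 norm.

    Hypothetical sketch of the step described above; names are illustrative.
    """
    tmp1 = np.asarray(mean_vector, dtype=float)    # first temporary vector
    tmp2 = np.asarray(fitting_vector, dtype=float)  # second temporary vector
    mean_norm = np.linalg.norm(tmp1)   # L2 norm: sqrt(sum of squared entries)
    fit_norm = np.linalg.norm(tmp2)
    return mean_norm, fit_norm
```

For instance, `l2_norms([3.0, 4.0], [5.0, 12.0])` yields the pair of norms 5.0 and 13.0.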
In a second aspect of embodiments of the present invention, there is provided an image analysis-based data output apparatus including:
the acquiring unit is used for acquiring a feature point coordinate array in the image to be processed;
the analysis unit is used for analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type;
and the output unit is used for outputting the data to be output through an output mode corresponding to the data type of the data to be output.
In an embodiment of this embodiment, the acquiring unit includes:
the processing subunit is used for preprocessing the image to be processed to obtain a result image corresponding to the image to be processed;
and the processing subunit is further configured to process the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image.
In one embodiment of this embodiment, the analysis unit includes:
the segmentation subunit is used for segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
the obtaining subunit is used for obtaining a feature point coordinate subarray corresponding to each submatrix from the feature point coordinate array;
the calculation subunit is used for calculating a plurality of feature point coordinate subarrays to obtain feature point central points corresponding to the sub-matrixes, and inserting the feature point central points into the sub-matrix feature point array queues corresponding to the feature point central points;
the fitting subunit is used for respectively fitting each sub-matrix characteristic point array queue to obtain fitting data of each sub-matrix characteristic point array queue;
the determining subunit is configured to determine the feature point coordinate array, a plurality of feature point coordinate subarrays, a plurality of feature point central points, and a plurality of fitting data as data to be output corresponding to the feature point coordinate array, where the feature point coordinate array is data to be output of a feature point array type, the feature point coordinate subarray is data to be output of a feature point subarray type, the feature point central point is data to be output of a central point type, and the fitting data is data to be output of a fitting type.
In one embodiment of this embodiment, the partitioning subunit includes:
the conversion module is used for converting the result image into a tensor matrix and acquiring the matrix width and the matrix height of the tensor matrix;
and the dividing module is used for dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
In an embodiment of this embodiment, the fitting subunit includes:
the fitting module is used for respectively fitting each sub-matrix characteristic point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix characteristic point array queue;
the inserting module is used for inserting each fitting equation into the corresponding fitting queue, and the fitting queues are in one-to-one correspondence with the sub-matrix characteristic point array queues;
the obtaining module is used for obtaining a characteristic point mean vector from the sub-matrix characteristic point array queue, obtaining a fitting vector from the fitting queue, and calculating according to the characteristic point mean vector and the fitting vector to obtain a mean vector L2 norm and a fitting vector L2 norm;
a determination module for determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
In an embodiment of this embodiment, the obtaining module includes:
the storage submodule is used for acquiring a characteristic point mean vector from the submatrix characteristic point array queue and storing the characteristic point mean vector to a first temporary vector;
the storage sub-module is further configured to obtain a fitting vector from the fitting queue and store the fitting vector to a second temporary vector;
and the calculation submodule is used for calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
In a third aspect of embodiments of the present invention, there is provided a computer-readable storage medium storing a computer program enabling, when executed by a processor, the method of any one of the first aspect.
In a fourth aspect of embodiments of the present invention, there is provided a computing device comprising a storage medium as described above.
According to the data output method, device, medium and computing equipment based on image analysis, the feature point coordinate array of an image to be processed can be determined from the image, and the coordinate array can then be analyzed to obtain data to be output of various different data types. Because data to be output of different data types is output in different modes, the different types can be clearly distinguished, so that people who need to obtain different data can obtain the content contained in the image more clearly and intuitively, which further improves the efficiency of image analysis and processing.
Drawings
The above and other objects, features and advantages of exemplary embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
fig. 1 is a schematic flowchart of a data output method based on image analysis according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart illustrating a data output method based on image analysis according to another embodiment of the present invention;
fig. 3 is a schematic structural diagram of a data output apparatus based on image analysis according to an embodiment of the present invention;
FIG. 4 schematically shows a structural diagram of a medium according to an embodiment of the invention;
fig. 5 schematically shows a structural diagram of a computing device according to an embodiment of the present invention.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
As will be appreciated by one skilled in the art, embodiments of the present invention may be embodied as a system, apparatus, device, method, or computer program product. Accordingly, the present disclosure may be embodied in the form of: entirely hardware, entirely software (including firmware, resident software, micro-code, etc.), or a combination of hardware and software.
According to an embodiment of the invention, a data output method, a data output device, a data output medium and a computing device based on image analysis are provided.
In this document, it is to be understood that any number of elements in the figures are provided by way of illustration and not limitation, and any nomenclature is used for differentiation only and not in any limiting sense.
The principles and spirit of the present invention are explained in detail below with reference to several representative embodiments of the invention.
Exemplary method
Referring to fig. 1, fig. 1 is a schematic flowchart of a data output method based on image analysis according to an embodiment of the present invention. It should be noted that the embodiments of the present invention can be applied to any applicable scenarios.
Fig. 1 shows a flowchart 100 of a data output method based on image analysis according to an embodiment of the present invention, which includes:
step S110, acquiring a feature point coordinate array in an image to be processed;
step S120, analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type;
step S130, outputting the data to be output in an output mode corresponding to the data type of the data to be output.
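The three steps above (S110 to S130) can be sketched as a minimal pipeline. This is an illustrative assumption, not the patented implementation: the function names, the toy "feature extraction" (nonzero pixels), and the dispatch-by-type output scheme are all invented for demonstration.

```python
def acquire_feature_points(image):
    # S110: stand-in for preprocessing plus feature extraction; treats any
    # nonzero pixel of a nested-list "image" as a feature point (row, col).
    return [(r, c) for r, row in enumerate(image)
            for c, v in enumerate(row) if v > 0]

def analyze(coords):
    # S120: analyze the coordinate array to produce data of several types.
    return {"feature_point_array": coords,
            "center_point": tuple(sum(a) / len(coords) for a in zip(*coords))}

def output(data):
    # S130: each data type gets its own output mode (here, a mark shape).
    modes = {"feature_point_array": "circle", "center_point": "triangle"}
    return {dtype: (modes[dtype], value) for dtype, value in data.items()}
```

Usage: `output(analyze(acquire_feature_points(img)))` returns each piece of data paired with the mark shape for its type.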
The data output method based on image analysis provided by this application is directed at scenarios in which an image acquisition device is used together with image recognition technology: data to be output of multiple data types is generated during the computation performed on the image acquired by the device, and the data of the different types is then output in different modes. The image acquisition device can be mounted on terminal equipment such as monitoring equipment, robots, automobiles, unmanned aerial vehicles and submersibles.
The technology of the invention can output the data to be output of different data types in different modes, and can obviously distinguish the data to be output of different data types, so that personnel needing to acquire different data can more clearly and intuitively acquire the content contained in the image, and further improve the efficiency of image analysis and processing.
The following explains, with reference to the accompanying drawings, how data to be output of different data types can be clearly distinguished, so that a person who needs to acquire different data can obtain the contents contained in an image more clearly and intuitively, thereby improving the efficiency of image analysis and processing:
first, the image to be processed may be an image acquired by the same image acquisition device, and the image acquisition device may be a camera, a probe, an endoscope, or the like, which is not limited in the embodiments of the present invention.
In addition, feature point extraction may be performed on an image to be processed to obtain a feature point coordinate array of the image to be processed, a coordinate system may be set based on the image to be processed, when a feature point of the image to be processed is extracted, a feature point coordinate corresponding to the feature point is determined based on the coordinate system, in general, one or more feature points may be extracted from the image to be processed, when a plurality of feature points exist in the image to be processed, a feature point coordinate corresponding to each feature point may be determined from the coordinate system, and feature point coordinates corresponding to all feature points extracted from the image to be processed may be added to the feature point coordinate array, that is, one image to be processed may correspond to one feature point coordinate array, and one feature point coordinate array may include one or more feature point coordinates.
In the embodiment of the present invention, the preset manner for analyzing the feature point coordinate array may be any algorithm, different preset manners may be used for analyzing the feature point coordinate array according to different results to be obtained, and intermediate data of multiple data types may be generated in the process of analyzing the feature point coordinate array, so that both the result obtained by analyzing the feature point coordinate array and the generated intermediate data may be determined as data to be output, and data to be output generated at different analysis stages may be determined as different data types, and one data type may correspond to one or more data to be output.
In addition, different data types may correspond to different output modes, and the different output modes of the different data types may be preset output modes, that is, the data types to be generated may be predicted before the feature point coordinate array is analyzed, and the output modes corresponding to each data type may be preset, and the output modes corresponding to each data type are different.
Furthermore, only data to be output corresponding to one data type may be output, data to be output corresponding to multiple data types may also be output, and data to be output corresponding to all data types may also be output, which is not limited in the embodiment of the present invention. In addition, the data type of the output data to be output can be determined through a received output instruction, the output instruction may include one or more data types to be output, and the output instruction may be user input required to acquire the data to be output.
For example, the data to be output may include three data types: data type a, data type b and data type c. The data to be output can be plotted in a preset coordinate system, where the output mode corresponding to data type a is a circular mark, the output mode corresponding to data type b is a rectangular mark, and the output mode corresponding to data type c is a triangular mark. The data corresponding to data type a can then be output as circular marks in the preset coordinate system, the data corresponding to data type b as rectangular marks, and the data corresponding to data type c as triangular marks, so that the data to be output of the three different data types is displayed in the coordinate system very intuitively.
Optionally, the data to be output of different data types may also be output in different colors. For example, for the three data types included in the data to be output (data type a, data type b and data type c), the data corresponding to data type a can be output in a first color, the data corresponding to data type b in a second color, and the data corresponding to data type c in a third color. The first, second and third colors can be any colors (such as red, yellow or green), as long as any two of them are different, so that the output of data of the three data types is more intuitive.
In addition, if the data to be output has the vector type data, the output mode of the vector type data to be output can also represent the data to be output through a mark (such as an arrow and the like) capable of indicating the direction, and in addition, the output mode of the data to be output can also be determined simultaneously based on different shapes, different colors and/or marks capable of indicating the direction, so that the diversity of the output modes is improved. For example, the output modes corresponding to different data types may be marks with the same color but different shapes, marks with different colors but the same shape, or marks with different colors and different shapes, which is not limited in the embodiment of the present invention.
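The shape/color/direction combinations described above can be sketched as a lookup table from data type to output mode. This is purely an illustrative assumption: the table contents, type names, and the `render_mark` helper are invented for demonstration and are not part of the patent.

```python
# Hypothetical mode table: each data type combines a shape and a color;
# vector-type data uses an arrow mark that can indicate direction.
OUTPUT_MODES = {
    "feature_point_array": {"shape": "circle",   "color": "red"},
    "center_point":        {"shape": "triangle", "color": "green"},
    "fitting":             {"shape": "arrow",    "color": "yellow"},
}

def render_mark(data_type, point):
    # A real renderer would draw the mark at `point` in the preset
    # coordinate system; here we only describe the mark to be drawn.
    mode = OUTPUT_MODES[data_type]
    return f'{mode["color"]} {mode["shape"]} at {point}'
```

Because shape and color are independent fields, the table can express same-color/different-shape modes, different-color/same-shape modes, or both varied at once, matching the variants listed above.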
Referring to fig. 2, fig. 2 is a schematic flow chart of a data output method based on image analysis according to another embodiment of the present invention, and a flow chart 200 of the data output method based on image analysis according to another embodiment of the present invention shown in fig. 2 includes:
step S210, preprocessing an image to be processed to obtain a result image corresponding to the image to be processed;
step S220, processing the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image;
by implementing the steps S210 to S220, the image to be processed can be preprocessed, so that the feature point coordinate array corresponding to the image to be processed is obtained, and the data processing mode of the image to be processed is simplified.
The preprocessing of the image to be processed may specifically be as follows: the image to be processed may be convolved with a preset convolution kernel to obtain a result image corresponding to the image to be processed. The preset convolution kernel may be derived from a second-order differential convolution kernel, for example set with reference to the Laplacian second-order differential kernel [[1, 1, 1], [1, -8, 1], [1, 1, 1]]; the preset convolution kernel may, for instance, be [[1, 1, 1], [1, -9, 1], [1, 1, 1]] or [[1, 1, 1], [1, -10, 1], [1, 1, 1]]. An operator may test different parameters, select the parameter with the best convolution effect, and determine the preset convolution kernel accordingly.
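The convolution step can be sketched in plain numpy. This is a minimal sketch under stated assumptions: it uses "valid" cross-correlation with no padding, which coincides with convolution here because the Laplacian-style kernel is symmetric; a production pipeline would more likely use an optimized library routine.

```python
import numpy as np

# The reference Laplacian second-order differential kernel from the text.
LAPLACIAN = np.array([[1, 1, 1],
                      [1, -8, 1],
                      [1, 1, 1]], dtype=float)

def convolve2d(image, kernel):
    """'Valid' 2-D convolution (no padding), as a minimal example."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Sum of the elementwise product of the kernel and the window.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out
```

Note that the kernel entries sum to zero, so flat regions of the image map to zero and only intensity changes (edges) produce a response, which is why this family of kernels suits feature-oriented preprocessing.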
In addition, the feature extraction algorithm may be the Oriented FAST and Rotated BRIEF (ORB) algorithm, which may be used to compute the result image and obtain the feature point coordinate array of the result image. For example, based on a preset coordinate system, the coordinate Kp = (height, width) of each feature point of the result image may be determined, and a feature point coordinate array Kps = [Kp1, Kp2, …, Kpn] may be generated from the plurality of feature point coordinates included in the result image.
Further, the feature extraction algorithm may also be the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, the Harris corner extraction algorithm, the Features from Accelerated Segment Test (FAST) corner detection algorithm, and the like, which is not limited in this embodiment.
Step S230, segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
step S240, obtaining a feature point coordinate sub-array corresponding to each sub-matrix from the feature point coordinate array;
step S250, calculating a plurality of feature point coordinate subarrays to obtain feature point central points corresponding to the sub-matrixes, and inserting the feature point central points into the sub-matrix feature point array queues corresponding to the feature point central points;
step S260, fitting each sub-matrix characteristic point array queue to obtain fitting data of each sub-matrix characteristic point array queue;
step S270, determining the feature point coordinate array, a plurality of feature point coordinate sub-arrays, a plurality of feature point central points and a plurality of fitting data as data to be output corresponding to the feature point coordinate array, wherein the feature point coordinate array is data to be output of a feature point array type, the feature point coordinate sub-array is data to be output of a feature point sub-array type, the feature point central points are data to be output of a central point type, and the fitting data is data to be output of a fitting type.
By implementing the steps S230 to S270, the image can be segmented, and then the data to be output of various data types can be calculated based on the plurality of sub-matrices and the feature point coordinate arrays corresponding to the segmented image, so that the diversity of the data to be output is improved.
As an optional implementation manner, in step S230, the step of segmenting the result image according to a preset manner to obtain a plurality of sub-matrices corresponding to the result image may specifically include the following steps:
converting the result image into a tensor matrix, and acquiring the matrix width and the matrix height of the tensor matrix;
and dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
By implementing this implementation mode, the result image can be converted into a tensor matrix, the matrix width and matrix height of the tensor matrix are acquired, and the tensor matrix is then segmented based on them. Because the width and height of the tensor matrices corresponding to images acquired by the same terminal device can be the same, the formats of the sub-matrices obtained by segmenting each image based on the matrix width and matrix height can also be the same, which ensures the consistency of the sub-matrices corresponding to different images.
The tensor matrix into which the result image is converted may be M', and since the images to be processed to be analyzed may be acquired by the same image acquisition device, the size of each image to be processed may be the same, that is, the matrix width and the matrix height of the tensor matrix corresponding to each image to be processed may also be the same. Therefore, the matrix width w and the matrix height h can be obtained from the tensor matrix.
Further, the manner of segmenting the tensor matrix based on the matrix width, the matrix height and the preset manner to obtain the plurality of sub-matrices corresponding to the tensor matrix may specifically be:
two straight lines line1 and line2 can be obtained by calculation according to a preset mode based on the matrix width w and the matrix height h, and the preset mode for calculating the lines 1 and 2 can be as follows: line1= ((0, w/2), (h, w/2)) and line2= ((h/2, 0), (h/2, w)), and the tensor matrix corresponding to the image to be processed can be divided into four sub-matrices:
A1 = M'[0 : h/2, 0 : w/2], A2 = M'[0 : h/2, w/2 : w], A3 = M'[h/2 : h, 0 : w/2], A4 = M'[h/2 : h, w/2 : w]
wherein A1 is the upper-left part of the tensor matrix, namely the upper-left part of the image to be processed; A2 is the upper-right part of the tensor matrix, namely the upper-right part of the image to be processed; A3 is the lower-left part of the tensor matrix, namely the lower-left part of the image to be processed; and A4 is the lower-right part of the tensor matrix, namely the lower-right part of the image to be processed. Then, the feature point coordinate sub-arrays corresponding to the four areas can be obtained from the feature point coordinate array Kps corresponding to the image to be processed: the feature point coordinate sub-array corresponding to A1 may be A1Kps, that corresponding to A2 may be A2Kps, that corresponding to A3 may be A3Kps, and that corresponding to A4 may be A4Kps. In addition, the feature point coordinate sub-arrays of the different areas of the image to be processed may be inserted into corresponding queues for storage; for example, A1Kps of the image to be processed may be inserted into queue Q1, A2Kps into queue Q2, A3Kps into queue Q3, and A4Kps into queue Q4. The insertion mode may be a First In First Out (FIFO) mode.
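The quadrant split and the FIFO queue insertion described above can be sketched in Python with NumPy. The helper name `split_quadrants`, the (row, col) coordinate convention, and the dict-of-deques representation of queues Q1–Q4 are illustrative assumptions, not part of the patent:

```python
import numpy as np
from collections import deque

def split_quadrants(m, kps):
    """Split tensor matrix m into four quadrant sub-matrices (A1..A4) and
    partition the feature point coordinates (row, col) among them.
    Illustrative sketch; the patent fixes only the split lines, not the code."""
    h, w = m.shape[:2]
    subs = {
        "A1": m[:h // 2, :w // 2],   # upper-left
        "A2": m[:h // 2, w // 2:],   # upper-right
        "A3": m[h // 2:, :w // 2],   # lower-left
        "A4": m[h // 2:, w // 2:],   # lower-right
    }
    parts = {k: [] for k in subs}
    for (y, x) in kps:
        if y < h // 2:
            key = "A1" if x < w // 2 else "A2"
        else:
            key = "A3" if x < w // 2 else "A4"
        parts[key].append((y, x))
    queues = {k: deque() for k in subs}  # Q1..Q4
    for k in subs:
        queues[k].append(parts[k])       # deque append/popleft gives FIFO order
    return subs, queues

m = np.zeros((100, 200))
subs, queues = split_quadrants(m, [(10, 20), (80, 150)])
```

With a 100×200 matrix, each quadrant is 50×100; the point (10, 20) lands in A1 and (80, 150) in A4.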
In addition, the feature point center point corresponding to each sub-matrix region also needs to be calculated. Since each sub-region contains a large number of feature points, the center point of the feature points of each sub-matrix needs to be calculated separately. The preset center point calculation formula may be:
PnAvg = (1/m) × Σ(i=1..m) pi, where p1, p2, …, pm are the feature point coordinates in the sub-array AnKps (n = 1, 2, 3, 4)
wherein, when n = 1, the calculated P1Avg can be defined as the feature point center point of the feature point coordinate sub-array A1Kps of the sub-matrix A1; when n = 2, the calculated P2Avg can be defined as the feature point center point of the feature point coordinate sub-array A2Kps of the sub-matrix A2; when n = 3, the calculated P3Avg can be defined as the feature point center point of the feature point coordinate sub-array A3Kps of the sub-matrix A3; and when n = 4, the calculated P4Avg can be defined as the feature point center point of the feature point coordinate sub-array A4Kps of the sub-matrix A4.
Further, the feature point center point of the sub-matrix A1 of the image to be processed may be inserted into the sub-matrix feature point array queue QAvg1 = [P1Avg1, P2Avg2, …, PnAvgn] corresponding to A1. By repeating the above steps, the sub-matrix feature point array queue QAvg2 corresponding to A2, the sub-matrix feature point array queue QAvg3 corresponding to A3, and the sub-matrix feature point array queue QAvg4 corresponding to A4 can be obtained.
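A minimal sketch of the center point calculation and queue insertion, assuming the preset center point formula is the arithmetic mean of the feature point coordinates in a sub-array (the names `center_point` and `q_avg1` are hypothetical):

```python
import numpy as np

def center_point(sub_kps):
    # Arithmetic mean of the feature point coordinates in one sub-array,
    # e.g. A1Kps -> P1Avg (assumed reading of the preset center point formula).
    return tuple(np.asarray(sub_kps, dtype=float).mean(axis=0))

q_avg1 = []                                       # sub-matrix feature point array queue QAvg1
q_avg1.append(center_point([(0.0, 0.0), (2.0, 4.0)]))
```

For the two sample points the centroid is (1.0, 2.0); repeating this per frame fills QAvg1 with successive center points.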
As an optional implementation manner, in step S260, the manner of respectively fitting each sub-matrix feature point array queue to obtain fitting data of each sub-matrix feature point array queue may specifically include the following steps:
fitting each sub-matrix characteristic point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix characteristic point array queue;
inserting each fitting equation into a fitting queue corresponding to the fitting equation, wherein the fitting queues are in one-to-one correspondence with the sub-matrix characteristic point array queues;
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, acquiring a fitting vector from the fitting queue, and calculating according to the characteristic point mean vector and the fitting vector to obtain a mean vector L2 norm and a fitting vector L2 norm;
determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
By implementing this implementation mode, the sub-matrix feature point array can be fitted through a second-order fitting equation to obtain data such as a fitting equation, a mean vector L2 norm and a fitting vector L2 norm, and the fitting equation, the mean vector L2 norm and the fitting vector L2 norm are determined as fitting data, so that the diversity of the fitting data is improved.
Wherein, because the distribution of the feature point center points in the image to be processed is not uniform, the observed moving direction of the feature point center points appears disordered. Therefore, a second-order fitting equation (for example, a 2-order linear fitting equation based on least squares) can be adopted to fit the sub-matrix feature point array queues QAvg1, QAvg2, QAvg3 and QAvg4 respectively to obtain fitting results, thereby realizing linear regression of the discrete feature point center points. Each obtained fitting result can be a linear equation, and the fitting results can be inserted into different fitting queues QFAvg1, QFAvg2, QFAvg3 and QFAvg4 in a one-to-one correspondence mode.
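A second-order least-squares fit of one such queue could be sketched with `numpy.polyfit`; the function name `fit_queue`, the sample points, and the list representation of the fitting queue QFAvg1 are assumptions for illustration:

```python
import numpy as np

def fit_queue(q_avg):
    # Second-order least-squares fit of the center points in one
    # sub-matrix feature point array queue (e.g. QAvg1); returns the
    # coefficients of the fitting equation y = c2*x**2 + c1*x + c0.
    pts = np.asarray(q_avg, dtype=float)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

qf_avg1 = []                                              # fitting queue QFAvg1
qf_avg1.append(fit_queue([(0, 1), (1, 2), (2, 5), (3, 10)]))  # points on y = x**2 + 1
```

Because the sample points lie exactly on y = x² + 1, the recovered coefficients are [1, 0, 1] (highest degree first).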
Further, the method of obtaining the feature point mean vector from the sub-matrix feature point array queue, obtaining the fitting vector from the fitting queue, and calculating the mean vector L2 norm and the fitting vector L2 norm according to the feature point mean vector and the fitting vector may specifically include the following steps:
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, and storing the characteristic point mean vector to a first temporary vector;
acquiring a fitting vector from the fitting queue, and storing the fitting vector to a second temporary vector;
and calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
By implementing this implementation mode, the first temporary vector obtained from the sub-matrix feature point array queue and the second temporary vector obtained from the fitting queue can be calculated through the norm calculation formula to obtain the mean vector L2 norm and the fitting vector L2 norm, so that the calculation accuracy of the mean vector L2 norm and the fitting vector L2 norm is improved.
Wherein, the feature point mean vectors can be obtained from the sub-matrix feature point array queues QAvg1, QAvg2, QAvg3 and QAvg4 and stored to the first temporary vectors Arr1, Arr2, Arr3 and Arr4, where the sub-matrix feature point array queues correspond to the first temporary vectors one by one. In addition, the fitting vectors can be obtained from the fitting queues QFAvg1, QFAvg2, QFAvg3 and QFAvg4 and stored to the second temporary vectors ArrF1, ArrF2, ArrF3 and ArrF4, where the fitting queues correspond to the second temporary vectors one by one.
Further, the formula for calculating the mean vector L2 norm and the fitting vector L2 norm based on the feature point mean vector and the fitting vector can be (taking the sub-matrix feature point array queue QAvg1 as an example):
||x||p = (Σ(i=1..n) |xi|^p)^(1/p)
wherein p = 2, since the L2 norm distance is calculated; and n = 2, since the distance of the starting coordinates of the mean feature points is calculated. The mean vector L2 norms L1, L2, L3 and L4 can be calculated according to this formula; likewise, the fitting vector L2 norms LF1, LF2, LF3 and LF4 can be obtained.
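The norm computation with p = 2 and n = 2 can be sketched as follows; `l2_norm` and the sample vectors standing in for Arr1 and ArrF1 are assumptions for illustration:

```python
import numpy as np

def l2_norm(vec, p=2):
    # General Lp norm; with p = 2 and a 2-element vector this matches
    # the patent's setting (p = 2, n = 2).
    v = np.asarray(vec, dtype=float)
    return float(np.sum(np.abs(v) ** p) ** (1.0 / p))

L1 = l2_norm([3.0, 4.0])    # mean vector norm from first temporary vector Arr1
LF1 = l2_norm([6.0, 8.0])   # fitting vector norm from second temporary vector ArrF1
```

For the sample vectors this yields L1 = 5.0 and LF1 = 10.0, the familiar Euclidean lengths.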
Further, the feature point coordinate array Kps, the plurality of feature point coordinate sub-arrays, the plurality of feature point center points (P1Avg, P2Avg, P3Avg and P4Avg) and the plurality of fitting data (the fitting equations, the mean vector L2 norms L1, L2, L3 and L4, and the fitting vector L2 norms LF1, LF2, LF3 and LF4) can be determined as the data to be output corresponding to the feature point coordinate array, wherein the feature point coordinate array can be data to be output of a feature point array type, the feature point coordinate sub-array can be data to be output of a feature point sub-array type, the feature point center point can be data to be output of a center point type, and the fitting data can be data to be output of a fitting type.
Step S280, outputting the data to be output in an output mode corresponding to the data type of the data to be output.
According to the technology, the data to be output of different data types can be obviously distinguished, so that people needing to acquire different data can acquire the content contained in the image more clearly and intuitively, and the efficiency of image analysis and processing is improved; the data processing mode of the image to be processed can be simplified; the diversity of data to be output can be improved; the consistency of a plurality of sub-matrixes corresponding to different images can be ensured; the diversity of the fitting data can be improved; the accuracy of the calculation of the norm of the mean vector L2 and the norm of the fitting vector L2 can also be improved.
Exemplary devices
Having described the method of an exemplary embodiment of the present invention, next, a data output apparatus based on image analysis of an exemplary embodiment of the present invention will be described with reference to fig. 3, the apparatus including:
an obtaining unit 310, configured to obtain a feature point coordinate array in an image to be processed;
the analysis unit 320 is configured to analyze the feature point coordinate array according to a preset manner to obtain data to be output corresponding to the feature point coordinate array, where the data to be output at least includes data of one data type;
the output unit 330 is configured to output the data to be output in an output mode corresponding to the data type of the data to be output.
The technology of the invention can output the data to be output of different data types in different modes, and can obviously distinguish the data to be output of different data types, so that personnel needing to acquire different data can more clearly and intuitively acquire the content contained in the image, and further improve the efficiency of image analysis and processing.
As an alternative implementation, the obtaining unit 310 of the apparatus may include:
the processing subunit is used for preprocessing the image to be processed to obtain a result image corresponding to the image to be processed;
and the processing subunit is further configured to process the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image.
By implementing the implementation mode, the image to be processed can be preprocessed, so that the feature point coordinate array corresponding to the image to be processed is obtained, and the data processing mode of the image to be processed is simplified.
As an alternative embodiment, the analyzing unit 320 of the apparatus may include:
the segmentation subunit is used for segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
the obtaining subunit is used for obtaining a feature point coordinate sub-array corresponding to each sub-matrix from the feature point coordinate array;
the calculation subunit is used for calculating a plurality of feature point coordinate subarrays to obtain feature point central points corresponding to the sub-matrixes, and inserting the feature point central points into the sub-matrix feature point array queues corresponding to the feature point central points;
the fitting subunit is used for respectively fitting each sub-matrix characteristic point array queue to obtain fitting data of each sub-matrix characteristic point array queue;
the determining subunit is configured to determine the feature point coordinate array, a plurality of feature point coordinate subarrays, a plurality of feature point central points, and a plurality of fitting data as data to be output corresponding to the feature point coordinate array, where the feature point coordinate array is data to be output of a feature point array type, the feature point coordinate subarray is data to be output of a feature point subarray type, the feature point central point is data to be output of a central point type, and the fitting data is data to be output of a fitting type.
By implementing this implementation mode, the image can be segmented, and data to be output of various data types can then be calculated based on the plurality of sub-matrices and the feature point coordinate array corresponding to the segmented image, so that the diversity of the data to be output is improved.
As an alternative embodiment, the partitioning sub-unit of the apparatus may comprise:
the conversion module is used for converting the result image into a tensor matrix and acquiring the matrix width and the matrix height of the tensor matrix;
and the dividing module is used for dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
By implementing this implementation mode, the result image can be converted into a tensor matrix, and the matrix width and the matrix height of the tensor matrix can be acquired; the tensor matrix is then segmented based on the matrix width and the matrix height. Because the width and the height of the tensor matrices corresponding to images acquired by the same terminal device can be the same, the formats of the plurality of sub-matrices segmented from each image can be the same based on the matrix width and the matrix height of the tensor matrix, so that the consistency of the plurality of sub-matrices corresponding to different images is ensured.
As an alternative embodiment, the fitting subunit of the apparatus may include:
the fitting module is used for respectively fitting each sub-matrix characteristic point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix characteristic point array queue;
the inserting module is used for inserting each fitting equation into the corresponding fitting queue, and the fitting queues are in one-to-one correspondence with the sub-matrix characteristic point array queues;
the obtaining module is used for obtaining a characteristic point mean vector from the sub-matrix characteristic point array queue, obtaining a fitting vector from the fitting queue, and calculating according to the characteristic point mean vector and the fitting vector to obtain a mean vector L2 norm and a fitting vector L2 norm;
a determination module for determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
By implementing this implementation mode, the sub-matrix feature point array can be fitted through a second-order fitting equation to obtain data such as a fitting equation, a mean vector L2 norm and a fitting vector L2 norm, and the fitting equation, the mean vector L2 norm and the fitting vector L2 norm are determined as fitting data, so that the diversity of the fitting data is improved.
As an alternative implementation, the obtaining module of the apparatus may include:
the storage submodule is used for acquiring a characteristic point mean vector from the submatrix characteristic point array queue and storing the characteristic point mean vector to a first temporary vector;
the storage sub-module is further configured to obtain a fitting vector from the fitting queue and store the fitting vector to a second temporary vector;
and the calculation submodule is used for calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
By implementing this implementation mode, the first temporary vector obtained from the sub-matrix feature point array queue and the second temporary vector obtained from the fitting queue can be calculated through the norm calculation formula to obtain the mean vector L2 norm and the fitting vector L2 norm, so that the calculation accuracy of the mean vector L2 norm and the fitting vector L2 norm is improved.
Exemplary Medium
Having described the method and apparatus of the exemplary embodiments of the present invention, next, a computer-readable storage medium of the exemplary embodiments of the present invention is described with reference to fig. 4, please refer to fig. 4, which illustrates a computer-readable storage medium being an optical disc 40 having stored thereon a computer program (i.e., a program product), which, when executed by a processor, implements the steps described in the above-mentioned method embodiments, for example, acquiring a feature point coordinate array in an image to be processed; analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type; outputting the data to be output through an output mode corresponding to the data type of the data to be output; the specific implementation of each step is not repeated here.
It should be noted that examples of the computer-readable storage medium may also include, but are not limited to, phase change memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), flash memory, or other optical and magnetic storage media, which are not described in detail herein.
Exemplary computing device
Having described the method, medium, and apparatus of exemplary embodiments of the present invention, a computing device for data output based on image analysis of exemplary embodiments of the present invention is next described with reference to fig. 5.
FIG. 5 illustrates a block diagram of an exemplary computing device 50 suitable for use in implementing embodiments of the present invention; the computing device 50 may be a computer system or a server. The computing device 50 shown in FIG. 5 is only one example and should not be taken to limit the scope of use and functionality of embodiments of the present invention.
As shown in fig. 5, components of computing device 50 may include, but are not limited to: one or more processors or processing units 501, a system memory 502, and a bus 503 that couples the various system components (including the system memory 502 and the processing unit 501).
Computing device 50 typically includes a variety of computer system readable media. Such media may be any available media that is accessible by computing device 50 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 502 may include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM) 5021 and/or cache memory 5022. Computing device 50 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, a storage system 5023 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to the bus 503 by one or more data media interfaces. The system memory 502 may include at least one program product having a set (e.g., at least one) of program modules configured to carry out the functions of embodiments of the invention.
A program/utility 5025 having a set (at least one) of program modules 5024 may be stored in, for example, system memory 502, and such program modules 5024 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment. The program modules 5024 generally perform the functions and/or methodologies of the described embodiments of the invention.
Computing device 50 may also communicate with one or more external devices 504 (e.g., keyboard, pointing device, display, etc.). Such communication may occur via input/output (I/O) interfaces 505. Moreover, computing device 50 may also communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via network adapter 506. As shown in FIG. 5, network adapter 506 communicates with other modules of computing device 50, such as processing unit 501, via bus 503. It should be appreciated that although not shown in FIG. 5, other hardware and/or software modules may be used in conjunction with computing device 50.
The processing unit 501 executes various functional applications and data processing, for example, acquiring a feature point coordinate array in an image to be processed, by running a program stored in the system memory 502; analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type; and outputting the data to be output through an output mode corresponding to the data type of the data to be output. The specific implementation of each step is not repeated here. It should be noted that although in the above detailed description several units/modules or sub-units/sub-modules of the data output device based on image analysis are mentioned, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more of the units/modules described above may be embodied in one unit/module according to embodiments of the invention. Conversely, the features and functions of one unit/module described above may be further divided into embodiments by a plurality of units/modules.
In the description of the present invention, it should be noted that the terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: any person skilled in the art can modify or easily conceive the technical solutions described in the foregoing embodiments or equivalent substitutes for some technical features within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Moreover, while the operations of the method of the invention are depicted in the drawings in a particular order, this does not require or imply that the operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.

Claims (10)

1. A data output method based on image analysis, comprising:
acquiring a feature point coordinate array in an image to be processed;
analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type;
outputting the data to be output through an output mode corresponding to the data type of the data to be output;
the method for acquiring the feature point coordinate array in the image to be processed comprises the following steps:
preprocessing an image to be processed to obtain a result image corresponding to the image to be processed;
processing the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image;
analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the method comprises the following steps:
segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
acquiring a feature point coordinate sub-array corresponding to each sub-matrix from the feature point coordinate array;
calculating a plurality of feature point coordinate subarrays to obtain feature point center points corresponding to the sub-matrixes, and inserting the feature point center points into the sub-matrix feature point array queues corresponding to the feature point center points;
fitting each sub-matrix characteristic point array queue to obtain fitting data of each sub-matrix characteristic point array queue;
and determining the characteristic point coordinate array, a plurality of characteristic point coordinate sub-arrays, a plurality of characteristic point central points and a plurality of fitting data as data to be output corresponding to the characteristic point coordinate array, wherein the characteristic point coordinate array is data to be output of a characteristic point array type, the characteristic point coordinate sub-array is data to be output of a characteristic point sub-array type, the characteristic point central point is data to be output of a central point type, and the fitting data is data to be output of a fitting type.
2. The data output method based on image analysis according to claim 1, wherein the segmenting the result image according to a preset mode to obtain a plurality of sub-matrices corresponding to the result image comprises:
converting the result image into a tensor matrix, and acquiring the matrix width and the matrix height of the tensor matrix;
and dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
3. The data output method based on image analysis according to claim 1 or 2, fitting each sub-matrix feature point array queue to obtain fitting data of each sub-matrix feature point array queue, including:
fitting each sub-matrix characteristic point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix characteristic point array queue;
inserting each fitting equation into a fitting queue corresponding to the fitting equation, wherein the fitting queues are in one-to-one correspondence with the sub-matrix characteristic point array queues;
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, acquiring a fitting vector from the fitting queue, and calculating according to the characteristic point mean vector and the fitting vector to obtain a mean vector L2 norm and a fitting vector L2 norm;
determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
4. The image analysis based data output method of claim 3, obtaining feature point mean vectors from the sub-matrix feature point array queue and fitting vectors from the fitting queue, and calculating a mean vector L2 norm and a fitting vector L2 norm from the feature point mean vectors and the fitting vectors, comprising:
acquiring a characteristic point mean vector from the sub-matrix characteristic point array queue, and storing the characteristic point mean vector to a first temporary vector;
acquiring a fitting vector from the fitting queue, and storing the fitting vector to a second temporary vector;
and calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
5. A data output apparatus based on image analysis, comprising:
the acquiring unit is used for acquiring a feature point coordinate array in the image to be processed;
the analysis unit is used for analyzing the feature point coordinate array according to a preset mode to obtain data to be output corresponding to the feature point coordinate array, wherein the data to be output at least comprises data of one data type;
the output unit is used for outputting the data to be output in an output mode corresponding to the data type of the data to be output;
wherein the acquisition unit includes:
the processing subunit is used for preprocessing the image to be processed to obtain a result image corresponding to the image to be processed;
the processing subunit is further configured to process the result image through a feature extraction algorithm to obtain a feature point coordinate array corresponding to the result image;
wherein the analysis unit comprises:
the segmentation subunit is used for segmenting the result image according to a preset mode to obtain a plurality of sub-matrixes corresponding to the result image;
the obtaining subunit is used for obtaining a feature point coordinate subarray corresponding to each sub-matrix from the feature point coordinate array;
the calculation subunit is used for calculating a plurality of feature point coordinate subarrays to obtain feature point central points corresponding to the sub-matrixes, and inserting the feature point central points into the sub-matrix feature point array queues corresponding to the feature point central points;
the fitting subunit is used for respectively fitting each sub-matrix feature point array queue to obtain fitting data of each sub-matrix feature point array queue;
the determining subunit is configured to determine the feature point coordinate array, a plurality of feature point coordinate subarrays, a plurality of feature point central points, and a plurality of fitting data as data to be output corresponding to the feature point coordinate array, where the feature point coordinate array is data to be output of a feature point array type, the feature point coordinate subarray is data to be output of a feature point subarray type, the feature point central point is data to be output of a central point type, and the fitting data is data to be output of a fitting type.
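The calculation subunit's centre-point step can be illustrated under the assumption that the feature point central point of a sub-matrix is the coordinate mean of its feature point coordinate subarray; the helper `insert_center_points` and the sample coordinates are hypothetical, not taken from the patent.

```python
# Sketch: compute a centre point per feature point coordinate subarray
# and insert it into the corresponding sub-matrix feature point array queue.
from collections import deque

def insert_center_points(subarrays, queues):
    for subarray, queue in zip(subarrays, queues):
        n = len(subarray)
        cx = sum(x for x, _ in subarray) / n  # mean x coordinate
        cy = sum(y for _, y in subarray) / n  # mean y coordinate
        queue.append((cx, cy))  # insert centre point into its queue

queues = [deque()]
insert_center_points([[(0, 0), (2, 0), (0, 2), (2, 2)]], queues)
# The centre of these four corner points is (1.0, 1.0).
```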
6. The image analysis-based data output device of claim 5, the segmentation subunit comprising:
the conversion module is used for converting the result image into a tensor matrix and acquiring the matrix width and the matrix height of the tensor matrix;
and the dividing module is used for dividing the tensor matrix based on the matrix width, the matrix height and a preset mode to obtain a plurality of sub-matrices corresponding to the tensor matrix.
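A sketch of the conversion and dividing modules of claim 6, assuming the "preset mode" is a uniform grid split; the function name, grid size, and `numpy` usage are illustrative assumptions.

```python
# Convert the result image to a tensor matrix, read its matrix width and
# matrix height, and divide it into a grid of equal sub-matrices.
import numpy as np

def split_into_submatrices(result_image, rows=2, cols=2):
    tensor = np.asarray(result_image)    # result image as a tensor matrix
    height, width = tensor.shape[:2]     # matrix height and matrix width
    sub_h, sub_w = height // rows, width // cols
    submatrices = []
    for r in range(rows):
        for c in range(cols):
            submatrices.append(
                tensor[r * sub_h:(r + 1) * sub_h, c * sub_w:(c + 1) * sub_w]
            )
    return submatrices

# An 8x8 matrix split in a 2x2 grid yields four 4x4 sub-matrices.
subs = split_into_submatrices(np.zeros((8, 8)), rows=2, cols=2)
```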
7. The image analysis-based data output device of claim 5 or 6, the fitting subunit comprising:
the fitting module is used for respectively fitting each sub-matrix feature point array queue through a second-order fitting equation to obtain a fitting equation of each sub-matrix feature point array queue;
the inserting module is used for inserting each fitting equation into the corresponding fitting queue, wherein the fitting queues are in one-to-one correspondence with the sub-matrix feature point array queues;
the obtaining module is used for obtaining a feature point mean vector from the sub-matrix feature point array queue, obtaining a fitting vector from the fitting queue, and calculating a mean vector L2 norm and a fitting vector L2 norm from the feature point mean vector and the fitting vector;
a determination module for determining a plurality of the fitting equations, the mean vector L2 norm, and the fitting vector L2 norm as fitting data.
8. The image analysis-based data output device of claim 7, the acquisition module comprising:
the storage submodule is used for acquiring a feature point mean vector from the sub-matrix feature point array queue and storing the feature point mean vector to a first temporary vector;
the storage sub-module is further configured to obtain a fitting vector from the fitting queue and store the fitting vector to a second temporary vector;
and the calculation submodule is used for calculating the first temporary vector and the second temporary vector according to a preset norm calculation formula to obtain a mean vector L2 norm and a fitting vector L2 norm.
9. A storage medium storing a computer program which, when executed by a processor, implements the image analysis-based data output method according to any one of claims 1 to 4.
10. A computing device comprising the storage medium of claim 9.
CN202011422905.XA 2020-12-08 2020-12-08 Data output method, device, medium and computing equipment based on image analysis Active CN112287951B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011422905.XA CN112287951B (en) 2020-12-08 2020-12-08 Data output method, device, medium and computing equipment based on image analysis


Publications (2)

Publication Number Publication Date
CN112287951A CN112287951A (en) 2021-01-29
CN112287951B true CN112287951B (en) 2021-04-06

Family

ID=74426858

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011422905.XA Active CN112287951B (en) 2020-12-08 2020-12-08 Data output method, device, medium and computing equipment based on image analysis

Country Status (1)

Country Link
CN (1) CN112287951B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111080627A (en) * 2019-12-20 2020-04-28 南京航空航天大学 2D +3D large airplane appearance defect detection and analysis method based on deep learning
CN111599432A (en) * 2020-05-29 2020-08-28 上海优医基医疗影像设备有限公司 Three-dimensional craniofacial image feature point mark analysis system and method
US20200334853A1 (en) * 2018-03-06 2020-10-22 Fotonation Limited Facial features tracker with advanced training for natural rendering of human faces in real-time
CN111814711A (en) * 2020-07-15 2020-10-23 中国矿业大学 Image feature fast matching method and system applied to mine machine vision


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"Image matching algorithm based on perspective-invariant binary feature descriptor"; Geng Lichuan et al.; Journal on Communications (通信学报); 2015-04-25; full text *

Also Published As

Publication number Publication date
CN112287951A (en) 2021-01-29

Similar Documents

Publication Publication Date Title
EP3627437B1 (en) Data screening device and method
EP3621034B1 (en) Method and apparatus for calibrating relative parameters of collector, and storage medium
JP6902122B2 (en) Double viewing angle Image calibration and image processing methods, equipment, storage media and electronics
CN109272442B (en) Method, device and equipment for processing panoramic spherical image and storage medium
CN112365521B (en) Speed monitoring method and device of terminal equipment, medium and computing equipment
CN111311593A (en) Multi-ellipse detection and evaluation algorithm, device, terminal and readable storage medium based on image gradient information
CN112825199A (en) Collision detection method, device, equipment and storage medium
CN116188805B (en) Image content analysis method and device for massive images and image information network
CN112287951B (en) Data output method, device, medium and computing equipment based on image analysis
CN115457202B (en) Method, device and storage medium for updating three-dimensional model
CN112197708A (en) Measuring method and device, electronic device and storage medium
CN116977671A (en) Target tracking method, device, equipment and storage medium based on image space positioning
CN108629219B (en) Method and device for identifying one-dimensional code
CN112861874B (en) Expert field denoising method and system based on multi-filter denoising result
CN114882465A (en) Visual perception method and device, storage medium and electronic equipment
CN113762173A (en) Training method and device for human face light stream estimation and light stream value prediction model
CN110309335B (en) Picture matching method, device and equipment and storage medium
CN115731256A (en) Vertex coordinate detection method, device, equipment and storage medium
CN113379826A (en) Method and device for measuring volume of logistics piece
CN110647826A (en) Method and device for acquiring commodity training picture, computer equipment and storage medium
CN112991428A (en) Box volume measuring method and device, computer equipment and storage medium
CN112258550B (en) Movement direction monitoring method, medium and device of terminal equipment and computing equipment
CN116109815B (en) Positioning method and device for test card calculation area and terminal equipment
EP4310784A1 (en) Image processing apparatus, image processing method, and program
CN113177903B (en) Fusion method, system and equipment of foreground point cloud and background point cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant