CN112381773B - Key cross section data analysis method, device, equipment and storage medium - Google Patents

Key cross section data analysis method, device, equipment and storage medium

Info

Publication number
CN112381773B
Authority
CN
China
Prior art keywords
section
convolution module
layer
convolution
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011221755.6A
Other languages
Chinese (zh)
Other versions
CN112381773A (en)
Inventor
杨艺兴
梁翠玲
罗海英
龚仕涛
莫家凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongfeng Liuzhou Motor Co Ltd
Original Assignee
Dongfeng Liuzhou Motor Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongfeng Liuzhou Motor Co Ltd filed Critical Dongfeng Liuzhou Motor Co Ltd
Priority to CN202011221755.6A
Publication of CN112381773A
Application granted
Publication of CN112381773B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/10 Geometric CAD
    • G06F30/15 Vehicle, aircraft or watercraft design
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Abstract

The invention discloses a key section data analysis method, device, equipment and storage medium. The method determines a section key coordinate offset according to a section image, adjusts the section image according to a standard section image to obtain a predicted section image, and corrects the section key coordinate offset according to the standard section key coordinate offset to obtain predicted section key coordinate information. Classification information and target position information are then obtained according to the predicted section image and the predicted section key coordinate information, and a section data analysis report is generated from them.

Description

Key cross section data analysis method, device, equipment and storage medium
Technical Field
The invention relates to the technical field of vehicles, in particular to a method, a device, equipment and a storage medium for analyzing key section data.
Background
At present, sections of vehicle parts can be acquired through secondary development of software, but manual intervention is still required: the section key coordinate offsets needed for a section are obtained from human experience values, so the acquired section data are inaccurate. In addition, the section data analysis report is produced manually from the section data, which reduces the working efficiency of generating the report.
The above is provided only to assist understanding of the technical solution of the present invention, and does not constitute an admission that it is prior art.
Disclosure of Invention
The invention mainly aims to provide a key section data analysis method, device, equipment and storage medium, so as to solve the technical problem of acquiring section data accurately and improving the working efficiency of generating a section data analysis report.
In order to achieve the above object, the present invention provides a method for analyzing key section data, wherein the method for analyzing key section data comprises:
acquiring a section image of a three-dimensional model of a vehicle part on a preset plane, and determining the key coordinate offset of the section according to the section image;
adjusting the section image according to the standard section image to obtain a predicted section image, and correcting the key coordinate offset of the section according to the key coordinate offset of the standard section to obtain key coordinate information of the predicted section;
processing according to the predicted section image and the predicted section key coordinate information to obtain classification information and target position information of the predicted section image;
and generating a section data analysis report according to the classification information and the target position information.
Optionally, the step of acquiring a cross-sectional image of the three-dimensional model of the vehicle part on a preset plane includes:
acquiring a vehicle part three-dimensional model, and determining section position information according to the vehicle part three-dimensional model;
determining a model offset value and a model offset angle according to the section position information;
and obtaining a section image on a preset plane according to the model deviation value and the model deviation angle.
Optionally, the step of determining a critical coordinate offset of the cross section according to the cross section image includes:
converting the section image into a section pixel matrix;
and performing convolution processing and pooling processing on the section pixel matrix, determining section regression information, and determining the section key coordinate offset according to the section regression information.
Optionally, before the step of processing according to the predicted cross-section image and the key coordinate information of the predicted cross-section to obtain the classification information and the target position information of the predicted cross-section image, the method further includes:
determining predicted section output data according to the predicted section image, the predicted section key coordinate information, the standard section coordinate information output, and standard coordinate data;
determining a prediction probability value according to the output data of the prediction section;
judging whether the predicted probability value is smaller than a preset probability threshold value or not;
and when the prediction probability value is smaller than the preset probability threshold value, executing the step of processing according to the prediction section image and the key coordinate information of the prediction section to obtain the classification information and the target position information of the prediction section image.
Optionally, after the step of determining whether the predicted probability value is smaller than a preset probability threshold, the method further includes:
and when the predicted probability value is greater than or equal to the preset probability threshold value, returning to the step of determining the key coordinate offset of the cross section according to the cross section image.
Optionally, after the step of generating a cross-section data analysis report according to the classification information and the target position information, the method further includes:
and storing the section image, the classification information and the target position information into a section attribute database.
In addition, in order to achieve the above object, the present invention further provides a key cross-section data analysis device, including:
the acquisition module is used for acquiring a section image of the vehicle part three-dimensional model on a preset plane and determining the key coordinate offset of the section according to the section image;
the adjusting module is used for adjusting the section image according to the standard section image to obtain a predicted section image, and correcting the key coordinate offset of the section according to the key coordinate offset of the standard section to obtain key coordinate information of the predicted section;
the processing module is used for processing according to the predicted section image and the predicted section key coordinate information to obtain classification information and target position information of the predicted section image;
and the generation module is used for generating a section data analysis report according to the classification information and the target position information.
In addition, to achieve the above object, the present invention further provides a key cross-section data analysis apparatus, including: a memory, a processor and a critical cross-section data analysis program stored on the memory and executable on the processor, the critical cross-section data analysis program configured to implement the steps of the critical cross-section data analysis method as described above.
In addition, to achieve the above object, the present invention further provides a storage medium, on which a key cross-section data analysis program is stored, and the key cross-section data analysis program, when executed by a processor, implements the steps of the key cross-section data analysis method as described above.
The method comprises the steps of firstly obtaining a section image of the vehicle part three-dimensional model on a preset plane and determining the section key coordinate offset according to the section image; then adjusting the section image according to a standard section image to obtain a predicted section image, and correcting the section key coordinate offset according to the standard section key coordinate offset to obtain predicted section key coordinate information; then processing the predicted section image and the predicted section key coordinate information to obtain classification information and target position information of the predicted section image; and finally generating a section data analysis report according to the classification information and the target position information. In this way, section data are acquired accurately and the working efficiency of generating the section data analysis report is improved.
Drawings
Fig. 1 is a schematic structural diagram of a key cross-section data analysis device of a hardware operating environment according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a first embodiment of a critical section data analysis method according to the present invention;
fig. 3 is a block diagram of a first embodiment of a critical section data analysis apparatus according to the present invention.
The implementation, functional features and advantages of the present invention will be further described with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a critical section data analysis device of a hardware operating environment according to an embodiment of the present invention.
As shown in fig. 1, the critical section data analysis apparatus may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 implements connection and communication among these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard), and may optionally also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless-Fidelity (Wi-Fi) interface). The memory 1005 may be a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may alternatively be a storage device separate from the processor 1001.
Those skilled in the art will appreciate that the configuration shown in FIG. 1 does not constitute a limitation of the critical cross-sectional data analysis apparatus, and may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
As shown in fig. 1, a memory 1005, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, and a key section data analysis program.
In the critical section data analysis apparatus shown in fig. 1, the network interface 1004 is mainly used for data communication with a network server, and the user interface 1003 is mainly used for data interaction with a user. The critical section data analysis device calls the critical section data analysis program stored in the memory 1005 through the processor 1001 and executes the critical section data analysis method provided by the embodiments of the present invention.
An embodiment of the present invention provides a method for analyzing key cross-section data, and referring to fig. 2, fig. 2 is a schematic flowchart of a first embodiment of the method for analyzing key cross-section data according to the present invention.
In this embodiment, the method for analyzing the critical section data includes the following steps:
step S10: and acquiring a section image of the three-dimensional model of the vehicle part on a preset plane, and determining the key coordinate offset of the section according to the section image.
It is easy to understand that the execution subject of the embodiment may be a key cross-section data analysis device with functions of image processing, data processing, network communication, program operation, and the like, and may also be other computer devices with similar functions, and the embodiment is not limited thereto.
It is understood that the vehicle component three-dimensional model may be a 3D model of a vehicle component, where the vehicle component may be a vehicle door, a tire, or the like, and the embodiment is not limited thereto.
The section image is a two-dimensional image, that is, a planar image. The vehicle part three-dimensional model can be cut at any position, and the section image on the preset plane corresponding to that cutting position can then be obtained, which is not limited in this embodiment.
The section key coordinate offset may be the coordinate offset corresponding to the cutting position in the vehicle part three-dimensional model, and the like; this embodiment is not limited thereto.
Further, the section image of the vehicle part three-dimensional model on the preset plane may be acquired as follows: acquire the vehicle part three-dimensional model, determine section position information according to the three-dimensional model, determine a model offset value and a model offset angle according to the section position information, and obtain the section image on the preset plane according to the model offset value and the model offset angle. The model offset value may be a coordinate, a numerical value, or the like, and the model offset angle may be, for example, 15 degrees to the upper left or 16 degrees to the lower right; this embodiment is not limited thereto.
The section position information may be a cut coordinate or an angle of the section, or the like.
Further, for ease of understanding, the following is illustrated:
the method comprises the following steps: and after the origin point is defined, calling an Application Programming Interface (CATIA API) program to create a datum plane according to the defined offset value along the defined offset direction, wherein the datum plane can intersect with the body object of the selected assembly, and an intersection line of the intersection position can be created according to the API provided by the CATIA.
Step two: for an offset angle, the deflection angle can be determined from the movement direction and the perpendicular to the movement direction through the origin. A reference plane can be determined from this rotation angle; the reference plane is intersected with the body objects of the selected assembly, and the section is created as above.
Step three: select an existing plane and offset from it. The program reads the offset value and passes it into the interface provided by CATIA, which requires the original plane and the offset distance value as input. After the new datum plane is created, the selected assembly is identified, the body objects under the assembly are read, and the intersection with the newly created datum plane is solved.
Step four: to create a part section, a curve position is input and the intersection line between that position and the selected assembly body is solved. An offset surface can be created from the set offset value and intersected with the assembly body to obtain an intersection line; projecting that line onto its plane yields the section effect at that position, that is, the section image on the preset plane.
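For illustration, the following is a minimal sketch of the datum-plane workflow in steps one to four, assuming the standard CATIA V5 COM automation interface accessed through pywin32; the offset value, the choice of the XY plane as reference, and the geometrical-set and body indices are illustrative, not taken from the patent.

```python
import win32com.client

OFFSET_MM = 50.0  # illustrative model offset value read by the program

catia = win32com.client.Dispatch("CATIA.Application")
part_doc = catia.ActiveDocument            # the open CATPart document
part = part_doc.Part
hsf = part.HybridShapeFactory

# Step three above: offset a new datum plane from an existing plane.
base = part.CreateReferenceFromObject(part.OriginElements.PlaneXY)
plane = hsf.AddNewPlaneOffset(base, OFFSET_MM, False)  # plane, distance, orientation
part.HybridBodies.Item(1).AppendHybridShape(plane)     # assumes a geometrical set exists

# Steps one/four above: intersect the new datum plane with a body object
# of the selected assembly to obtain the section intersection line.
body_ref = part.CreateReferenceFromObject(part.Bodies.Item(1))
plane_ref = part.CreateReferenceFromObject(plane)
section_line = hsf.AddNewIntersection(plane_ref, body_ref)
part.HybridBodies.Item(1).AppendHybridShape(section_line)
part.Update()
```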
Step five: the section image can be input into a deep learning target detection network model, and the section type and the coordinate offsets can be determined through the processing of the target detection model.
The target detection network model is sequentially provided with: an input layer, a first convolution module, a second convolution module, a third convolution module, a first Add layer, a fourth convolution module, a second Add layer, a fifth convolution module, a third Add layer, a sixth convolution module, a seventh convolution module, a fourth Add layer, an eighth convolution module, a fifth Add layer, a ninth convolution module, a sixth Add layer, a tenth convolution module, a seventh Add layer, an eleventh convolution module, a twelfth convolution module, an eighth Add layer, a thirteenth convolution module, a ninth Add layer, a fourteenth convolution module, a tenth Add layer, a fifteenth convolution module, an eleventh Add layer, a sixteenth convolution module, a twelfth Add layer, a seventeenth convolution module, a thirteenth Add layer, an eighteenth convolution module, a nineteenth convolution module, a fourteenth Add layer, a twentieth convolution module, a fifteenth Add layer, a twenty-first convolution module, a sixteenth Add layer, a twenty-second convolution module, a twenty-third convolution module, a seventeenth Add layer, a twenty-fourth convolution module, a twenty-fifth convolution module, an eighteenth Add layer, a twenty-sixth convolution module, a twenty-seventh convolution module, a twenty-eighth convolution module, a twenty-ninth convolution module, a thirtieth convolution module, a thirty-first convolution module, a thirty-second convolution module, a RegressBoxes layer, a ClipBoxes layer and a FilterDetections layer.
In the deep learning target detection network, the loss value is calculated by the following loss function Lfl (a focal loss):
$$
L_{fl} =
\begin{cases}
-\alpha \, (1 - y')^{\gamma} \, \log y', & y = 1 \\
-(1 - \alpha) \, (y')^{\gamma} \, \log (1 - y'), & y = 0
\end{cases}
$$
where y' is the inference output of the deep convolutional neural network for the sample input image, that is, the sample predicted section output data; y indicates whether the evaluated object is foreground; γ is the rate that down-weights easy samples, γ ∈ [0, 5]; and α is a weighting factor used to balance the importance of positive/negative samples, α ∈ [0, 1].
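The loss above is the standard focal-loss form; a direct NumPy sketch follows, with γ and α in the ranges given in the text (the default values here are illustrative, not from the patent):

```python
import numpy as np

def focal_loss(y_pred, y_true, gamma=2.0, alpha=0.25):
    """Element-wise focal loss; y_pred in (0, 1), y_true in {0, 1}."""
    eps = 1e-7
    y_pred = np.clip(y_pred, eps, 1.0 - eps)                       # avoid log(0)
    pos = -alpha * (1.0 - y_pred) ** gamma * np.log(y_pred)        # y = 1 (foreground)
    neg = -(1.0 - alpha) * y_pred ** gamma * np.log(1.0 - y_pred)  # y = 0 (background)
    return np.where(y_true == 1, pos, neg)
```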
Further, the method for determining the key coordinate offset of the cross section according to the cross section image may be to convert the cross section image into a cross section pixel matrix, perform convolution and pooling on the cross section pixel matrix, determine cross section regression information, and determine the key coordinate offset of the cross section according to the cross section regression information.
In a specific implementation, the input layer feeds the section picture into the network as a 460 × 600 × 3 pixel matrix. The first convolution module has a convolution kernel size of 7 × 7, 64 convolution kernels, and an output size of 230 × 300 × 64 (i.e., a stride of 2).
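As a small illustration of this input step, the following sketch (Pillow and NumPy; the file name and the normalization are hypothetical) turns a section picture into the 460 × 600 × 3 matrix the input layer expects:

```python
import numpy as np
from PIL import Image

img = Image.open("section.png").convert("RGB").resize((600, 460))  # (width, height)
pixel_matrix = np.asarray(img, dtype=np.float32) / 255.0           # shape (460, 600, 3)
batch = pixel_matrix[np.newaxis, ...]                              # add batch dimension
```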
The input of the second convolution module is the output of the first convolution module, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernels of convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 64, 64 and 256; the output of this module is a matrix of 115 × 150 × 256.
The input of the third convolution module is the output of the first pooling layer; the module is provided with 1 convolution layer, the convolution kernel size is 1 × 1, and the number of convolution kernels is 256; the output of this module is a matrix of 115 × 150 × 256.
The first Add layer adds the output of the second convolution module and the output of the third convolution module; followed by a RELU activation layer, the module outputs a matrix of size 115 × 150 × 256.
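The pattern formed by the second convolution module, the third convolution module, the first Add layer and the RELU activation is a 1 × 1 / 3 × 3 / 1 × 1 bottleneck with a 1 × 1 projection shortcut. Below is a minimal sketch of one such unit; the patent names no framework, so the use of tf.keras is an assumption:

```python
import tensorflow as tf
from tensorflow.keras import layers

def bottleneck_with_projection(x, filters=(64, 64, 256)):
    """One convolution module plus Add layer, as described above."""
    f1, f2, f3 = filters
    main = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)     # convolution layer 1
    main = layers.Conv2D(f2, 3, padding="same", activation="relu")(main)  # convolution layer 2
    main = layers.Conv2D(f3, 1, padding="same")(main)                     # convolution layer 3
    shortcut = layers.Conv2D(f3, 1, padding="same")(x)                    # third-module projection
    out = layers.Add()([main, shortcut])                                  # first Add layer
    return layers.ReLU()(out)                                             # RELU activation layer
```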
The input of the fourth convolution module is the output matrix of the first Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 64, 64 and 256 respectively; the output of this module is a matrix of 115 × 150 × 256.
The second Add layer adds the output of the first Add layer and the output of the fourth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 115 × 150 × 256.
The input of the fifth convolution module is the output matrix of the second Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 64, 64 and 256 respectively; the output of this module is a matrix of 115 × 150 × 256.
The third Add layer adds the output of the second Add layer to the output of the fifth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 115 × 150 × 256.
The input of the sixth convolution module is the output matrix of the third Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. Convolution kernels of convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 128, 128 and 256; the output of this module is a 58 × 75 × 512 matrix.
The input of the seventh convolution module is an output matrix of a third Add layer, the module is provided with 1 convolution layer, the size of a convolution kernel is 1 × 1, and the number of the convolution kernels is 512; the output of this module is a 58 x 75 x 512 matrix.
The fourth Add layer adds the output of the sixth convolution module and the output of the seventh convolution module; followed by a RELU activation layer, the module outputs a matrix size of 58 x 75 x 512.
The input of the eighth convolution module is the output matrix of the fourth Add layer, and the module is sequentially provided with a convolution layer 1, a convolution layer 2 and a convolution layer 3. The convolution kernel sizes of convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 128, 128 and 256; the output of this module is a 58 × 75 × 512 matrix.
The fifth Add layer adds the output of the fourth Add layer to the output of the eighth convolution module; followed by a RELU activation layer, the module outputs a matrix size of 58 x 75 x 512.
The input of the ninth convolution module is the output matrix of the fifth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. Convolution kernels of convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 128, 128 and 256; the output of this module is a 58 × 75 × 512 matrix.
The sixth Add layer adds the output of the fifth Add layer to the output of the ninth convolution module; followed by a RELU activation layer, the module outputs a matrix size of 58 x 75 x 512.
The input of the tenth convolution module is the output matrix of the sixth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a 58 × 75 × 512 matrix.
The seventh Add layer adds the output of the sixth Add layer to the output of the tenth convolution module; followed by a RELU activation layer, the module outputs a matrix size of 58 x 75 x 512.
The input of the eleventh convolution module is an output matrix of a third Add layer, the module is provided with 1 convolution layer, the size of a convolution kernel is 1 × 1, and the number of the convolution kernels is 1024; the output of this module is a matrix of 29 x 38 x 1024.
The input of the twelfth convolution module is the output matrix of the seventh Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. Convolution kernels of convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a matrix of 29 × 38 × 1024.
The eighth Add layer adds the output of the seventh Add layer to the output of the twelfth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 29 x 38 x 1024.
The input of the thirteenth convolution module is the output matrix of the eighth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a matrix of 29 × 38 × 1024.
The ninth Add layer adds the output of the eighth Add layer and the output of the thirteenth convolution module; followed by a RELU activation layer, which outputs a matrix of size 29 x 38 x 1024.
The input of the fourteenth convolution module is the output matrix of the ninth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. Convolution kernels of convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a matrix of 29 × 38 × 1024.
The tenth Add layer adds the output of the ninth Add layer to the output of the fourteenth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 29 x 38 x 1024.
The input of the fifteenth convolution module is the output matrix of the tenth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. Convolution kernels of convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a matrix of 29 × 38 × 1024.
The eleventh Add layer adds the output of the tenth Add layer and the output of the fifteenth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 29 x 38 x 1024.
The input of the sixteenth convolution module is the output matrix of the eleventh Add layer, and the module is sequentially provided with a convolution layer 1, a convolution layer 2 and a convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are 1 × 1, 3 × 3 and 1 × 1 respectively, and the number of convolution kernels is 256, 256 and 1024 respectively; the output of this module is a matrix of 29 × 38 × 1024.
The twelfth Add layer adds the output of the eleventh Add layer to the output of the sixteenth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 29 x 38 x 1024.
The input of the seventeenth convolution module is the output matrix of the twelfth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 512, 512 and 2048; the output of this module is a matrix of 29 × 38 × 1024.
The thirteenth Add layer adds the output of the twelfth Add layer to the output of the seventeenth convolution module; followed by a RELU activation layer, the module outputs a matrix of size 29 x 38 x 1024.
The input of the eighteenth convolution module is the output matrix of the thirteenth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 512, 512 and 2048; the output of this module is a 15 × 19 × 2048 matrix.
The input of the nineteenth convolution module is an output matrix of a thirteenth Add layer, the module is provided with 1 convolution layer, the size of a convolution kernel is 1 x 1, and the number of the convolution kernels is 256; the output of this module is a matrix of 15 × 19 × 2048.
The fourteenth Add layer adds the output of the eighteenth convolution module and the output of the nineteenth convolution module; followed by a RELU activation layer, which outputs a matrix of size 15 × 19 × 2048.
The input of the twentieth convolution module is the output matrix of the fourteenth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 512, 512 and 2048; the output of this module is a 15 × 19 × 2048 matrix.
The fifteenth Add layer adds the output of the twentieth convolution module and the output of the thirteenth Add layer; followed by a RELU activation layer, which outputs a matrix of size 15 × 19 × 2048.
The input of the twenty-first convolution module is the output matrix of the fifteenth Add layer, and the module is sequentially provided with convolution layer 1, convolution layer 2 and convolution layer 3. The convolution kernel sizes of the convolution layers 1, 2 and 3 are respectively 1 × 1, 3 × 3 and 1 × 1, and the number of convolution kernels is respectively 512, 512 and 2048; the output of this module is a 15 × 19 × 2048 matrix.
The sixteenth Add layer adds the output of the twenty-first convolution module and the output of the fifteenth Add layer; followed by a RELU activation layer, which outputs a matrix of size 15 × 19 × 2048.
The input of the twenty-second convolution module is the output matrix of the sixteenth Add layer; the module is provided with 1 convolution layer, the convolution kernel size is 1 × 1, and the number of convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The input of the twenty-third convolution module is an output matrix of a thirteenth Add layer, the module is provided with 1 convolution layer, the size of a convolution kernel is 1 × 1, and the number of the convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The seventeenth Add layer adds the up-sampled outputs of the twenty-second and twenty-third convolution modules; this layer outputs a matrix of size 15 × 19 × 256.
The input of the twenty-fourth convolution module is an output matrix of a sixteenth Add layer, the module is provided with 1 convolution layer, the size of convolution kernels is 1 × 1, and the number of convolution kernels is 256; the output of this module is a matrix of 8 x 10 x 256.
The input of the twenty-fifth convolution module is an output matrix of a seventh Add layer, the module is provided with 1 convolution layer, the size of a convolution kernel is 1 × 1, and the number of the convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The eighteenth Add layer adds the up-sampled output of the sixteenth Add layer to the output of the twenty-fifth convolution module; this layer outputs a matrix of size 15 × 19 × 256.
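The seventeenth and eighteenth Add layers thus merge an up-sampled coarser feature map with a finer one, a feature-pyramid-style lateral connection. A sketch of such a merge follows, assuming tf.keras and nearest-neighbour resizing (the resize method is not specified in the patent):

```python
import tensorflow as tf
from tensorflow.keras import layers

def lateral_merge(coarse, fine):
    """Resize the coarser 1x1-reduced map to the finer map's spatial size and
    add them (e.g. 8 x 10 x 256 resized up to 15 x 19 x 256 before the Add)."""
    up = tf.image.resize(coarse, tf.shape(fine)[1:3], method="nearest")
    return layers.Add()([up, fine])
```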
The input of the twenty-sixth convolution module is an output matrix of a seventeenth Add layer, the module is provided with 1 convolution layer, the size of convolution kernels is 3 × 3, and the number of convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The input of the twenty-seventh convolution module is the output matrix of the twenty-second convolution module, the module is provided with 1 convolution layer, the convolution kernel size is 3 × 3, and the number of convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The input of the twenty-eighth convolution module is an output matrix of an eighteenth Add layer, the module is provided with 1 convolution layer, the size of convolution kernels is 3 × 3, and the number of convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The input of the twenty-ninth convolution module is the output matrix of the twenty-fourth convolution module, which is provided with 1 convolution layer, the convolution kernel size is 3 × 3, and the number of convolution kernels is 256; the output of this module is a 15 × 19 × 256 matrix.
The input of the thirtieth convolution module is the output matrix of the twenty-eighth convolution module, and the module is sequentially provided with convolution layer 1, convolution layer 2, convolution layer 3, convolution layer 4 and convolution layer 5. The convolution kernel sizes are all 3 × 3, the number of convolution kernels of the first 4 layers is 256, and the number of convolution kernels of convolution layer 5 is 36; the output of this module is a matrix of 15 × 19 × 36.
The input of the thirty-first convolution module is the output matrix of the twenty-eighth convolution module, and the module is sequentially provided with convolution layer 1, convolution layer 2, convolution layer 3, convolution layer 4 and convolution layer 5. The convolution kernel sizes are all 3 × 3, the number of convolution kernels of the first 4 layers is 256, and the number of convolution kernels of convolution layer 5 is 54; the output of this module is a matrix of 15 × 19 × 54.
The input of the thirty-second convolution module is the output matrix of convolution layer 1 of the thirty-first convolution module; the module is provided with 1 convolution layer, the convolution kernel size is 3 × 3, and the number of convolution kernels is 27; the output of this module is a matrix of 15 × 19 × 27.
The inputs of the RegressBoxes layer are the anchors generated from the twenty-fourth, twenty-sixth, twenty-seventh and twenty-eighth convolution modules through a concatenate operation, and the regression information generated from the output of the thirtieth convolution module through a concatenate operation. The regression information output by this layer is the offset of each coordinate, namely the section key coordinate offset.
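As an illustration of what a RegressBoxes-style layer computes, the sketch below applies the predicted per-coordinate offsets to the concatenated anchors; the (x1, y1, x2, y2) layout and the scale factor are assumptions, not values from the patent:

```python
import numpy as np

def regress_boxes(anchors, deltas, scale=0.2):
    """anchors, deltas: (N, 4) arrays of (x1, y1, x2, y2) and per-coordinate offsets."""
    w = anchors[:, 2] - anchors[:, 0]   # anchor widths
    h = anchors[:, 3] - anchors[:, 1]   # anchor heights
    boxes = anchors.copy()
    boxes[:, 0] += deltas[:, 0] * scale * w
    boxes[:, 1] += deltas[:, 1] * scale * h
    boxes[:, 2] += deltas[:, 2] * scale * w
    boxes[:, 3] += deltas[:, 3] * scale * h
    return boxes
```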
Step S20: and adjusting the section image according to the standard section image to obtain a predicted section image, and correcting the key coordinate offset of the section according to the key coordinate offset of the standard section to obtain key coordinate information of the predicted section.
It can be understood that, in the deep learning target detection network model, the section image can be adjusted according to the standard section image to obtain a predicted section image, and the section key coordinate offset can be corrected according to the standard section key coordinate offset to obtain predicted section key coordinate information.
That is to say, the inputs of the ClipBoxes layer are the section image from the input layer and the section key coordinate offsets output by the RegressBoxes layer. This layer further adjusts and corrects the predictions to obtain the predicted section image, the predicted section key coordinate information, and the like, which is not limited in this embodiment.
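The correction can be pictured as clipping the regressed coordinates to the bounds of the input section image; a sketch under that assumption, with the image size taken from the 460 × 600 input described above:

```python
import numpy as np

def clip_boxes(boxes, height=460, width=600):
    """Clip (x1, y1, x2, y2) box coordinates to the image bounds."""
    boxes[:, 0] = np.clip(boxes[:, 0], 0, width - 1)   # x1
    boxes[:, 1] = np.clip(boxes[:, 1], 0, height - 1)  # y1
    boxes[:, 2] = np.clip(boxes[:, 2], 0, width - 1)   # x2
    boxes[:, 3] = np.clip(boxes[:, 3], 0, height - 1)  # y2
    return boxes
```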
Step S30: and processing according to the predicted section image and the predicted section key coordinate information to obtain classification information and target position information of the predicted section image.
The classification information of the predicted section image may be a certain type, such as a vehicle door type or a tire type, which is not limited in this embodiment.
It can also be understood that detection filtering is performed in the FilterDetections layer of the deep learning target detection network model: the inputs of the FilterDetections layer are the output of the ClipBoxes layer and the classification information generated by the thirty-first convolution module and the thirty-second convolution module through a concatenate operation. The image classification information output by this layer is the classification information of the predicted section image, and the position information of the target frame in the image is the target position information.
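A FilterDetections-style layer typically keeps detections above a score threshold and suppresses overlapping boxes; the greedy non-maximum-suppression sketch below illustrates that behaviour (the thresholds are illustrative, and NMS itself is an assumption about this layer, not stated in the patent):

```python
import numpy as np

def filter_detections(boxes, scores, score_thresh=0.5, iou_thresh=0.5):
    """boxes: (N, 4) (x1, y1, x2, y2); scores: (N,). Returns kept boxes/scores."""
    keep = scores > score_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(-scores)            # highest score first
    selected = []
    while order.size:
        i = order[0]
        selected.append(i)
        # Intersection-over-union of the best box with the remaining boxes.
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = ((boxes[order[1:], 2] - boxes[order[1:], 0])
                 * (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + areas - inter + 1e-9)
        order = order[1:][iou <= iou_thresh]  # drop heavily overlapping boxes
    return boxes[selected], scores[selected]
```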
Further, before the step of processing the predicted section image and the predicted section key coordinate information to obtain the classification information and the target position information of the predicted section image, the method may further include: determining predicted section output data according to the predicted section image and the predicted section key coordinate information, determining a prediction probability value from that output data, and judging whether the prediction probability value is smaller than a preset probability threshold. When the prediction probability value is smaller than the preset probability threshold, the predicted section image and the predicted section key coordinate information are processed to obtain the classification information and the target position information of the predicted section image; when the prediction probability value is greater than or equal to the preset probability threshold, the method returns to the step of determining the section key coordinate offset according to the section image.
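Sketched as control flow, the check reads as below; every callable is a hypothetical placeholder for the model stages named above, and the threshold value is illustrative:

```python
def analyse(section_image, predict_offsets, adjust_and_correct,
            prediction_probability, classify_and_locate, prob_threshold=0.5):
    """Placeholder callables stand in for the model stages described in the text."""
    while True:
        offsets = predict_offsets(section_image)              # section key coordinate offsets
        image, coords = adjust_and_correct(section_image, offsets)
        if prediction_probability(image, coords) < prob_threshold:
            return classify_and_locate(image, coords)         # classification + target position
        # otherwise: return to determining the offsets from the section image
```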
Step S40: and generating a section data analysis report according to the classification information and the target position information.
The section data analysis report may be automatically generated according to the classification information and the target position information. The report may further include an engineer name, an employee number, a project, a section name, a part number, a level, Chinese and English names, a material, key dimensions, the section image, main section performance parameters, and the like, which is not limited in this embodiment.
Further, after the step of generating the section data analysis report according to the classification information and the target position information, the section image, the classification information, the target position information, and the section data analysis table may also be stored in a section attribute database, which is convenient for users to view and analyze.
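As an illustration of the storage step, a sketch using sqlite3 follows; the table schema, the field names and the use of SQLite are all assumptions, with the fields drawn from the report contents listed above:

```python
import sqlite3

# Illustrative report row; field names follow the report contents listed above.
report = {
    "section_name": "A-A", "part_number": "P-0001",
    "classification": "vehicle door", "target_position": "(x1, y1, x2, y2)",
}

conn = sqlite3.connect("section_attributes.db")   # hypothetical database file
conn.execute("""CREATE TABLE IF NOT EXISTS sections
                (section_name TEXT, part_number TEXT,
                 classification TEXT, target_position TEXT)""")
conn.execute("INSERT INTO sections VALUES (?, ?, ?, ?)",
             (report["section_name"], report["part_number"],
              report["classification"], report["target_position"]))
conn.commit()
conn.close()
```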
In this embodiment, a section image of the vehicle part three-dimensional model on a preset plane is first obtained and the section key coordinate offset is determined from the section image; the section image is then adjusted according to a standard section image to obtain a predicted section image, and the section key coordinate offset is corrected according to the standard section key coordinate offset to obtain predicted section key coordinate information; the predicted section image and the predicted section key coordinate information are then processed to obtain classification information and target position information of the predicted section image, and a section data analysis report is generated from them, so that accurate section data are obtained and the working efficiency of generating the report is improved.
Referring to fig. 3, fig. 3 is a block diagram illustrating a first embodiment of a critical section data analysis apparatus according to the present invention.
As shown in fig. 3, the apparatus for analyzing critical cross-section data according to the embodiment of the present invention includes:
the acquisition module 4001 is configured to acquire a cross-section image of a three-dimensional model of a vehicle component on a preset plane, and determine a critical coordinate offset of the cross-section according to the cross-section image;
the adjusting module 4002 is configured to adjust the cross-section image according to the standard cross-section image to obtain a predicted cross-section image, and correct the offset of the key coordinate of the cross-section according to the offset of the key coordinate of the standard cross-section to obtain key coordinate information of the predicted cross-section;
the processing module 4003 is configured to process the predicted cross-section image and the key coordinate information of the predicted cross-section image to obtain classification information and target position information of the predicted cross-section image;
a generating module 4004, configured to generate a cross-section data analysis report according to the classification information and the target location information.
The device of this embodiment first obtains a section image of the vehicle part three-dimensional model on a preset plane and determines the section key coordinate offset from the section image; it then adjusts the section image according to a standard section image to obtain a predicted section image and corrects the section key coordinate offset according to the standard section key coordinate offset to obtain predicted section key coordinate information; it then processes the predicted section image and the predicted section key coordinate information to obtain classification information and target position information of the predicted section image, and generates a section data analysis report from them, thereby acquiring section data accurately and improving the working efficiency of generating the section data analysis report.
Further, the obtaining module 4001 is further configured to obtain a vehicle part stereo model, and determine cross-section position information according to the vehicle part stereo model;
the obtaining module 4001 is further configured to determine a model offset value and a model offset angle according to the section position information;
the obtaining module 4001 is further configured to obtain a cross-sectional image on a preset plane according to the model offset value and the model offset angle.
Further, the obtaining module 4001 is further configured to convert the cross-section image into a cross-section pixel matrix;
the acquisition module 4001 is further configured to perform convolution processing on the cross-section pixel matrix to obtain a cross-section convolution matrix;
the acquisition module 4001 is further configured to perform pooling processing on the section convolution matrix to obtain a section pooling matrix;
the obtaining module 4001 is further configured to determine section regression information according to the section convolution matrix and the section pooling matrix, and determine a section key coordinate offset according to the section regression information.
Further, the obtaining module 4001 is further configured to calculate the section convolution matrix and the section pooling matrix to obtain a section accumulation matrix;
The obtaining module 4001 is further configured to perform nonlinear processing on the section accumulation matrix to obtain a section activation matrix;
the obtaining module 4001 is further configured to determine section regression information according to the section activation matrix.
Further, the processing module 4003 is further configured to determine predicted cross-section output data according to the predicted cross-section image and the key coordinate information of the predicted cross-section;
the processing module 4003 is further configured to determine a predicted probability value according to the predicted section output data;
the processing module 4003 is further configured to determine whether the predicted probability value is smaller than a preset probability threshold;
the processing module 4003 is further configured to, when the predicted probability value is smaller than the preset probability threshold, execute the operation of processing the predicted section image and the predicted section key coordinate information to obtain the classification information and the target position information of the predicted section image.
Further, the processing module 4003 is further configured to return to the operation of determining the key coordinate offset of the cross section according to the cross section image when the predicted probability value is greater than or equal to the preset probability threshold.
Further, the generating module 4004 is further configured to store the cross-sectional image, the classification information, and the target position information in a cross-sectional attribute database.
Other embodiments or specific implementation manners of the key section data analysis device of the present invention may refer to the above method embodiments, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are only for description, and do not represent the advantages and disadvantages of the embodiments.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the methods of the embodiments may be implemented by software plus a necessary general hardware platform, and certainly also by hardware, but in many cases the former is the better implementation. Based on such understanding, the technical solution of the present invention, or the portion contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and includes several instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the scope of the present invention, and all equivalent structures or equivalent processes performed by the present invention or directly or indirectly applied to other related technical fields are also included in the scope of the present invention.

Claims (10)

1. A key cross-section data analysis method is characterized by comprising the following steps:
acquiring a section image of a three-dimensional model of a vehicle part on a preset plane, and processing the section image through a preset target detection model to obtain the key coordinate offset of the section;
adjusting the section image through the preset target detection model and the standard section image to obtain a predicted section image, and correcting the key coordinate offset of the section according to the key coordinate offset of the standard section to obtain key coordinate information of the predicted section;
acquiring image classification information generated by the preset target detection model according to the predicted section image and the predicted section key coordinate information and position information of a target frame in the image; the image classification information is classification information of the predicted cross-section image, and the position information of the target frame is target position information;
generating a section data analysis report according to the classification information and the target position information;
wherein the preset target detection model structure is sequentially and logically provided with an input layer, a first convolution module, a second convolution module, a third convolution module, a first Add layer, a fourth convolution module, a second Add layer, a fifth convolution module, a third Add layer, a sixth convolution module, a seventh convolution module, a fourth Add layer, an eighth convolution module, a fifth Add layer, a ninth convolution module, a sixth Add layer, a tenth convolution module, a seventh Add layer, an eleventh convolution module, a twelfth convolution module, an eighth Add layer, a thirteenth convolution module, a ninth Add layer, a fourteenth convolution module, a tenth Add layer, a fifteenth convolution module, an eleventh Add layer, a sixteenth convolution module, a twelfth Add layer, a seventeenth convolution module, a thirteenth Add layer, an eighteenth convolution module, a nineteenth convolution module, a fourteenth Add layer, a twentieth convolution module, a fifteenth Add layer, a twenty-first convolution module, a sixteenth Add layer, a twenty-second convolution module, a twenty-third convolution module, a seventeenth Add layer, a twenty-fourth convolution module, a twenty-fifth convolution module, an eighteenth Add layer, a twenty-sixth convolution module, a twenty-seventh convolution module, a twenty-eighth convolution module, a twenty-ninth convolution module, a thirtieth convolution module, a thirty-first convolution module, a thirty-second convolution module, a RegressBoxes layer, a ClipBoxes layer and a FilterDetections layer; the FilterDetections layer is used for outputting the image classification information and the position information of the target frame in the image; the ClipBoxes layer is used for adjusting the section image of the input layer to obtain the predicted section image; and the ClipBoxes layer is also used for correcting the section key coordinate offset output by the RegressBoxes layer to obtain the predicted section key coordinate information.
2. The method as claimed in claim 1, wherein the step of obtaining the cross-sectional image of the three-dimensional model of the vehicle part on the preset plane comprises:
acquiring a vehicle part three-dimensional model, and determining section position information according to the vehicle part three-dimensional model;
determining a model offset value and a model offset angle according to the section position information;
and obtaining a section image on a preset plane according to the model deviation value and the model deviation angle.
3. The method according to claim 1, wherein the step of processing the sectional image through a preset target detection model to obtain a critical coordinate offset of the sectional image comprises:
converting the section image into a section pixel matrix through the preset target detection model;
performing convolution processing on the section pixel matrix through the preset target detection model to obtain a section convolution matrix;
pooling the section convolution matrix through the preset target detection model to obtain a section pooling matrix;
and determining section regression information according to the section convolution matrix and the section pooling matrix through the preset target detection model, and determining the section key coordinate offset according to the section regression information.
4. The method of claim 3, wherein said step of determining section regression information based on said section convolution matrix and said section pooling matrix comprises:
calculating the section convolution matrix and the section pooling matrix to obtain a section accumulation matrix;
carrying out nonlinear processing on the section accumulation matrix to obtain a section activation matrix;
and determining section regression information according to the section activation matrix.
5. The method according to claim 1, wherein before the step of acquiring the image classification information generated by the preset target detection model according to the predicted section image and the predicted section key coordinate information and the position information of the target frame in the image, the image classification information being the classification information of the predicted section image and the position information of the target frame being the target position information, the method further comprises:
determining predicted section output data according to the predicted section image and the predicted section key coordinate information;
determining a prediction probability value according to the predicted section output data;
judging whether the prediction probability value is smaller than a preset probability threshold value;
when the prediction probability value is smaller than the preset probability threshold value, executing the step of acquiring the image classification information generated by the preset target detection model according to the predicted section image and the predicted section key coordinate information, and the position information of the target frame in the image, and determining the image classification information as the classification information of the predicted section image and the position information of the target frame as the target position information.
6. The method of claim 5, wherein after the step of judging whether the prediction probability value is smaller than the preset probability threshold value, the method further comprises:
and when the prediction probability value is greater than or equal to the preset probability threshold value, returning to the step of obtaining the section key coordinate offset according to the section image.
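Claims 5 and 6 together describe an accept/retry loop around the model output; a schematic sketch follows, in which the threshold value, method names and output fields are hypothetical, not part of the patent.

```python
PROB_THRESHOLD = 0.5   # preset probability threshold (assumed value)

def analyse_section(section_image, model, max_iters=10):
    """Accept a prediction whose probability is below the threshold (claim 5);
    otherwise return to re-determining the section key coordinate offset (claim 6)."""
    for _ in range(max_iters):
        output = model.predict(section_image)      # predicted section output data
        if output.probability < PROB_THRESHOLD:    # claim 5: accept branch
            return output.classification, output.box_position
        # claim 6: probability >= threshold, recompute the offset and retry
        section_image = model.refine_offsets(section_image)
    raise RuntimeError("no acceptable prediction within the iteration budget")
```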
7. The method of claim 5, wherein the step of generating a section data analysis report according to the classification information and the target position information further comprises:
and storing the section image, the classification information and the target position information into a section attribute database.
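The storage step could be realized with any persistent store; below is a minimal sketch using Python's built-in sqlite3 module, with an illustrative table layout and placeholder values (the patent does not specify a schema).

```python
import sqlite3

# Placeholder values standing in for the real analysis outputs.
section_image_bytes = b"..."                  # encoded section image
classification = "example-section-class"     # classification information
target_position = "12,34,56,78"              # target frame position (x1,y1,x2,y2)

conn = sqlite3.connect("section_attributes.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS sections "
    "(image BLOB, classification TEXT, target_position TEXT)"
)
conn.execute(
    "INSERT INTO sections VALUES (?, ?, ?)",
    (section_image_bytes, classification, target_position),
)
conn.commit()
conn.close()
```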
8. A key cross section data analysis device, characterized by comprising:
the acquisition module is used for acquiring a section image of the vehicle part three-dimensional model on a preset plane, and processing the section image through a preset target detection model to obtain the section key coordinate offset;
the adjusting module is used for adjusting the section image through the preset target detection model and a standard section image to obtain a predicted section image, and correcting the section key coordinate offset according to a standard section key coordinate offset to obtain predicted section key coordinate information, wherein the preset target detection model sequentially arranges an input layer, a first convolution module, a second convolution module, a third convolution module, a first Add layer, a fourth convolution module, a second Add layer, a fifth convolution module, a third Add layer, a sixth convolution module, a seventh convolution module, a fourth Add layer, an eighth convolution module, a fifth Add layer, a ninth convolution module, a sixth Add layer, a tenth convolution module, a seventh Add layer, an eleventh convolution module, a twelfth convolution module, an eighth Add layer, a thirteenth convolution module, a ninth Add layer, a fourteenth convolution module, a tenth Add layer, a fifteenth convolution module, an eleventh Add layer, a sixteenth convolution module, a twelfth Add layer, a seventeenth convolution module, a thirteenth Add layer, an eighteenth convolution module, a nineteenth convolution module, a fourteenth Add layer, a twentieth convolution module, a fifteenth Add layer, a twenty-first convolution module, a sixteenth Add layer, a twenty-second convolution module, a twenty-third convolution module, a seventeenth Add layer, a twenty-fourth convolution module, a twenty-fifth convolution module, an eighteenth Add layer, a twenty-sixth convolution module, a twenty-seventh convolution module, a twenty-eighth convolution module, a twenty-ninth convolution module, a thirtieth convolution module, a thirty-first convolution module, a thirty-second convolution module, a RegressBoxes layer, a ClipBoxes layer and a FilterDetections layer; the ClipBoxes layer is used for adjusting the section image of the input layer to obtain the predicted section image, and is also used for correcting the section key coordinate offset output by the RegressBoxes layer to obtain the predicted section key coordinate information;
the processing module is used for acquiring image classification information generated by the preset target detection model according to the predicted section image and the predicted section key coordinate information, and position information of a target frame in the image; the image classification information is the classification information of the predicted section image, the position information of the target frame is the target position information, and the FilterDetections layer is used for outputting the image classification information and the position information of the target frame in the image;
and the generation module is used for generating a section data analysis report according to the classification information and the target position information.
9. An electronic device, characterized in that the device comprises: a memory, a processor, and a key cross section data analysis program stored on the memory and executable on the processor, the key cross section data analysis program being configured to implement the steps of the key cross section data analysis method of any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a key cross section data analysis program is stored on the computer-readable storage medium, and when executed by a processor, the key cross section data analysis program implements the steps of the key cross section data analysis method of any one of claims 1 to 7.
CN202011221755.6A 2020-11-05 2020-11-05 Key cross section data analysis method, device, equipment and storage medium Active CN112381773B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011221755.6A CN112381773B (en) 2020-11-05 2020-11-05 Key cross section data analysis method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112381773A (en) 2021-02-19
CN112381773B (en) 2023-04-18

Family

ID=74578760

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011221755.6A Active CN112381773B (en) 2020-11-05 2020-11-05 Key cross section data analysis method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112381773B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117591530B (en) * 2024-01-17 2024-04-19 杭银消费金融股份有限公司 Data cross section processing method and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003672A (en) * 2018-07-16 2018-12-14 北京睿客邦科技有限公司 Early lung cancer detection and classification integrated apparatus and system based on deep learning
CN109977943A (en) * 2019-02-14 2019-07-05 平安科技(深圳)有限公司 Image recognition method, system and storage medium based on YOLO
CN111814744A (en) * 2020-07-30 2020-10-23 河南威虎智能科技有限公司 Face detection method and device, electronic equipment and computer storage medium

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7599560B2 (en) * 2005-04-22 2009-10-06 Microsoft Corporation Embedded interaction code recognition
JP6103526B2 (en) * 2013-03-15 2017-03-29 オリンパス株式会社 Imaging device, image display device, and display control method for image display device
CN104239904A (en) * 2014-09-28 2014-12-24 中南大学 Non-contact detection method for external outline of railway vehicle
US11080920B2 (en) * 2015-03-25 2021-08-03 Devar Entertainment Limited Method of displaying an object
CN109299644A (en) * 2018-07-18 2019-02-01 广东工业大学 Vehicle target detection method based on a region-based fully convolutional network
CN109654998B (en) * 2019-02-28 2020-09-22 信阳同合车轮有限公司 Wheel detection method and system
CN110793501A (en) * 2019-11-19 2020-02-14 上海勘察设计研究院(集团)有限公司 Subway tunnel clearance detection method
CN111581859B (en) * 2020-03-31 2024-01-26 桂林电子科技大学 Ride comfort modeling and analysis method and system for a commercial vehicle with coupled nonlinear suspension

Also Published As

Publication number Publication date
CN112381773A (en) 2021-02-19

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant