CN113706506B - Method and device for detecting assembly state, electronic equipment and storage medium - Google Patents

Method and device for detecting assembly state, electronic equipment and storage medium

Info

Publication number
CN113706506B
Authority
CN
China
Prior art keywords
target object
positioning information
image
assembly state
state detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110996297.1A
Other languages
Chinese (zh)
Other versions
CN113706506A (en)
Inventor
黄家水
许慎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ainnovation Chongqing Technology Co ltd
Original Assignee
Ainnovation Chongqing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ainnovation Chongqing Technology Co ltd
Priority to CN202110996297.1A
Publication of CN113706506A
Application granted
Publication of CN113706506B
Active legal status
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application belongs to the technical field of detection and discloses a method, an apparatus, an electronic device and a storage medium for assembly state detection. The method comprises: inputting an image to be detected into a pre-trained positioning model to obtain key point positioning information of a first target object in the image to be detected; estimating a second target object image area of a second target object according to the key point positioning information and the relative position relation between the first target object and the second target object; cutting the image to be detected according to the second target object image area to obtain an assembly state detection area diagram; and inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result. In this way, when the assembly state between target objects is detected, poor robustness caused by external factors is reduced and the accuracy of assembly state detection is improved.

Description

Method and device for detecting assembly state, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of detection technologies, and in particular, to a method and apparatus for detecting an assembly state, an electronic device, and a storage medium.
Background
With the continuous development of computer vision technology, it is gradually being applied to the field of industrial quality inspection. In practical applications, computer vision technology is generally used to perform image analysis on a quality inspection image in order to detect the assembly state between multiple target objects in the image, for example the assembly state between a piston ring and a piston.
In the prior art, binarization or difference processing is generally performed on the quality inspection image, and whether the assembly state of each target object meets the quality inspection requirement is judged according to the processed quality inspection image.
However, factors such as illumination, defocus, viewing angle and occlusion influence the detection result, so the accuracy of assembly state detection based on binarization or difference processing is low.
Thus, how to improve the accuracy of the assembly state detection when detecting the assembly state between the target objects is a technical problem to be solved.
Disclosure of Invention
The invention aims to provide a method, a device, electronic equipment and a storage medium for detecting an assembly state, which are used for improving the accuracy of the assembly state detection when the assembly state between target objects is detected.
In one aspect, a method of assembly status detection includes:
inputting an image to be detected into a pre-trained positioning model to obtain key point positioning information of a first target object in the image to be detected, wherein the positioning model is constructed based on a deep learning algorithm;
estimating a second target object image area of the second target object according to the positioning information of the key points and the relative position relation between the first target object and the second target object;
cutting the image to be detected according to the second target object image area to obtain an assembly state detection area diagram;
and inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result, wherein the assembly state detection model is constructed based on a deep learning algorithm, and the assembly state detection result is used for representing the assembly state between the first target object and the second target object.
In the implementation process, a model constructed based on a deep learning algorithm is adopted to locate the first target object and the second target object respectively, and image clipping and assembly state detection are carried out according to the positioning results. Positioning and assembly state detection therefore do not rely on information such as image texture and brightness, which reduces the influence of complex and changeable environmental conditions on assembly state detection and improves its accuracy.
In one embodiment, the positioning information of the key point includes first estimated positioning information and second estimated positioning information, the image to be detected is input to a pre-trained positioning model, and the positioning information of the key point of the first target object in the image to be detected is obtained, including:
extracting features of the image to be detected to obtain a feature image;
performing image processing on the characteristic image aiming at the first target object to obtain a thermodynamic diagram corresponding to the first target object;
determining first estimated positioning information of a first designated key point of a first target object according to the thermodynamic diagram;
determining the key point offset of the second designated key points of the first target object according to the characteristic image, wherein the key point offset is the offset between the designated number of the second designated key points of the first target object and the first designated key points respectively;
and determining second estimated positioning information according to the key point offset and the first estimated positioning information.
In the implementation process, the first designated key point of the first target object is located from the image to be detected, and the second designated key points are then located from the positioning result of the first designated key point and the key point offsets between the first and second designated key points, so that the second designated key points are accurately positioned.
In one embodiment, determining first estimated location information for a first specified keypoint of a first target object based on a thermodynamic diagram comprises:
respectively determining a first foreground probability of each pixel in the thermodynamic diagram as a pixel corresponding to a first target object;
determining initial estimated positioning information of a first designated key point according to the first foreground probability and the pixel value corresponding to each pixel;
determining coordinate point offset of a first designated key point according to the characteristic image;
and adjusting the initial estimated positioning information according to the coordinate point offset to obtain first estimated positioning information.
In the implementation process, the first designated key point is initially located according to the first foreground probability that each pixel corresponds to the first target object and the pixel values, and is then located more accurately according to the coordinate point offset of the first designated key point, which improves the positioning accuracy of the first designated key point.
In one embodiment, determining initial pre-estimated location information of a first designated key point according to a first foreground probability and a pixel value corresponding to each pixel includes:
screening out first foreground probabilities meeting the screening conditions of the first target objects from the first foreground probabilities;
determining the maximum pixel value in the pixel values corresponding to the screened at least one first foreground probability;
determining a coordinate value of an image coordinate point corresponding to the maximum pixel value;
and determining initial estimated positioning information according to the coordinate values.
In the implementation process, the first designated key point is initially located according to the first foreground probability that each pixel corresponds to the first target object and the pixel values.
In one embodiment, estimating a second target object image area of the second target object according to the key point positioning information and the relative positional relationship between the first target object and the second target object includes:
carrying out affine transformation on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an affine transformation graph;
acquiring the image size of a first target object which is also output by the positioning model;
and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relation.
In the implementation process, the first target object is corrected and cut, and an effective assembly area of the second target object is obtained.
In one embodiment, inputting the assembly state detection area map to a pre-trained assembly state detection model to obtain an assembly state detection result, including:
carrying out semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object;
respectively determining a second foreground probability that each pixel in the semantic segmentation map is a second target object;
screening out second foreground probabilities meeting second target object screening conditions from the second foreground probabilities;
and obtaining an assembly state detection result according to the pixel corresponding to the screened second foreground probability.
In the implementation process, the image range of the image processing in the subsequent step and the data volume during the image processing can be reduced, so that the accuracy of the assembly state detection can be improved.
In one embodiment, the method for obtaining the assembly state detection result according to the pixel corresponding to the screened second foreground probability includes:
determining the image area of the pixel corresponding to the screened second foreground probability;
if the image area is higher than the preset area threshold, judging whether the assembly state between the first target object and the second target object is the installation state or not according to the pixel corresponding to the screened second foreground probability.
In the implementation process, the semantic segmentation map with the image area lower than the preset area threshold can be removed, so that the accuracy of a detection result is ensured.
In one embodiment, the first target object is a piston and the second target object is a piston ring.
In one aspect, there is provided an apparatus for assembly state detection, comprising:
the positioning unit is used for inputting the image to be detected into a pre-trained positioning model to obtain the key point positioning information of the first target object in the image to be detected, wherein the positioning model is constructed based on a deep learning algorithm;
the estimating unit is used for estimating a second target object image area of the second target object according to the positioning information of the key points and the relative position relation between the first target object and the second target object;
the clipping unit is used for clipping the image to be detected according to the second target object image area to obtain an assembly state detection area diagram;
the detection unit is used for inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result, wherein the assembly state detection model is constructed based on a deep learning algorithm, and the assembly state detection result is used for representing the assembly state between the first target object and the second target object.
In one embodiment, the positioning information of the key point includes first estimated positioning information and second estimated positioning information, and the positioning unit is configured to:
extracting features of the image to be detected to obtain a feature image;
performing image processing on the characteristic image aiming at the first target object to obtain a thermodynamic diagram corresponding to the first target object;
determining first estimated positioning information of a first designated key point of a first target object according to the thermodynamic diagram;
determining the key point offset of the second designated key points of the first target object according to the characteristic image, wherein the key point offset is the offset between the designated number of the second designated key points of the first target object and the first designated key points respectively;
and determining second estimated positioning information according to the key point offset and the first estimated positioning information.
In one embodiment, the positioning unit is configured to:
respectively determining a first foreground probability of each pixel in the thermodynamic diagram as a pixel corresponding to a first target object;
determining initial estimated positioning information of a first designated key point according to the first foreground probability and the pixel value corresponding to each pixel;
determining coordinate point offset of a first designated key point according to the characteristic image;
and adjusting the initial estimated positioning information according to the coordinate point offset to obtain first estimated positioning information.
In one embodiment, the positioning unit is configured to:
screening out first foreground probabilities meeting the screening conditions of the first target objects from the first foreground probabilities;
determining the maximum pixel value in the pixel values corresponding to the screened at least one first foreground probability;
determining a coordinate value of an image coordinate point corresponding to the maximum pixel value;
and determining initial estimated positioning information according to the coordinate values.
In one embodiment, the estimating unit is configured to:
carrying out affine transformation on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an affine transformation graph;
acquiring the image size of a first target object which is also output by the positioning model;
and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relation.
In one embodiment, the detection unit is configured to:
carrying out semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object;
respectively determining a second foreground probability that each pixel in the semantic segmentation map is a second target object;
screening out second foreground probabilities meeting second target object screening conditions from the second foreground probabilities;
and obtaining an assembly state detection result according to the pixel corresponding to the screened second foreground probability.
In one embodiment, the detection unit is configured to:
determining the image area of the pixel corresponding to the screened second foreground probability;
if the image area is higher than the preset area threshold, judging whether the assembly state between the first target object and the second target object is the installation state or not according to the pixel corresponding to the screened second foreground probability.
In one embodiment, the first target object is a piston and the second target object is a piston ring.
In one aspect, an electronic device is provided, comprising a processor and a memory storing computer-readable instructions which, when executed by the processor, perform the steps of the assembly state detection method provided in any of the alternative implementations described above.
In one aspect, a readable storage medium is provided, on which a computer program is stored which, when executed by a processor, performs the steps of the assembly state detection method provided in any of the alternative implementations described above.
In one aspect, a computer program product is provided which, when run on a computer, causes the computer to perform the steps of the assembly state detection method provided in any of the alternative implementations described above.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for detecting an assembled state according to an embodiment of the present application;
FIG. 2 is an exemplary diagram of an image to be detected according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an assembly state detection system according to an embodiment of the present application;
FIG. 4 is a schematic structural diagram of a positioning module according to an embodiment of the present application;
FIG. 5 is a schematic structural diagram of a detection module according to an embodiment of the present application;
FIG. 6 is a detailed flow chart of a method for detecting an assembly state according to an embodiment of the present application;
FIG. 7 is a schematic structural diagram of an apparatus for assembly state detection according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present application without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
Some of the terms referred to in the embodiments of the present application will be described first to facilitate understanding by those skilled in the art.
Terminal device: a mobile, fixed or portable terminal, for example a mobile handset, station, unit, device, multimedia computer, multimedia tablet, Internet node, communicator, desktop computer, laptop computer, notebook computer, netbook computer, tablet computer, personal communication system device, personal navigation device, personal digital assistant, audio/video player, digital camera/camcorder, positioning device, television receiver, radio broadcast receiver, electronic book device, game device, or any combination thereof, including the accessories and peripherals of these devices, or any combination thereof. It is also contemplated that the terminal device can support any type of user interface (e.g., a wearable device), and so on.
Server: it may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data and artificial intelligence platforms.
Deep learning algorithm: a branch of machine learning and a necessary path toward artificial intelligence. The concept of deep learning originates from research on artificial neural networks; a multi-layer perceptron with multiple hidden layers is a deep learning structure. Deep learning forms more abstract high-level representations of attribute categories or features by combining low-level features, so as to discover distributed feature representations of data. The motivation for studying deep learning is to build neural networks that simulate the human brain for analysis and learning, mimicking the mechanisms by which the human brain interprets data such as images, sounds and text.
Affine transformation: also called affine mapping; in geometry, it is a linear transformation of one vector space followed by a translation into another vector space.
In order to improve accuracy of assembly state detection when detecting an assembly state between target objects, the embodiment of the application provides a method, a device, electronic equipment and a storage medium for assembly state detection.
In this embodiment of the present application, the execution body may be an electronic device, and optionally, the electronic device may be a server or a terminal device.
Referring to fig. 1, a flowchart of a method for detecting an assembly state according to an embodiment of the present application is shown, where a specific implementation flow of the method is as follows:
Step 100: inputting the image to be detected into a pre-trained positioning model to obtain the key point positioning information of the first target object in the image to be detected.
Specifically, an image to be detected is input into a pre-trained positioning model to obtain first estimated positioning information and second estimated positioning information for the first target object in the image to be detected.
The positioning model is constructed based on a deep learning algorithm and is used for positioning the first target object and the second target object. The key point positioning information comprises first estimated positioning information and second estimated positioning information.
When the positioning model is used to determine the first estimated positioning information and the second estimated positioning information for the first target object in the image to be detected, the following steps can be adopted:
s1001: and extracting the characteristics of the image to be detected to obtain a characteristic image.
Specifically, through a positioning model, carrying out normalization pretreatment on an image to be detected to obtain a normalized image, and carrying out convolution, pooling and nonlinear treatment on the normalized image to obtain a characteristic image.
In this way, the first target object and the second target object can be positioned in a subsequent step by means of the feature images.
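By way of illustration only, a minimal convolutional backbone of this kind might be sketched in PyTorch as follows; the patent does not disclose a concrete network architecture, so the module name, layer widths and the half-resolution output are assumptions.

```python
import torch
import torch.nn as nn

class FeatureExtractor(nn.Module):
    """Hypothetical convolution-pooling-nonlinearity stack; the image is assumed
    to have been normalized (e.g. pixel values scaled to [0, 1]) beforehand."""
    def __init__(self, in_channels: int = 3, out_channels: int = 64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                   # pooling halves H and W
            nn.Conv2d(32, out_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)                                  # (N, out_channels, H/2, W/2)
```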
S1002: and performing image processing on the characteristic image aiming at the first target object to obtain a thermodynamic diagram corresponding to the first target object.
Specifically, a first foreground probability that each pixel in the feature image is a pixel corresponding to the first target object is determined respectively, a first background probability that each pixel is not a pixel corresponding to the first target object is determined, and a thermodynamic diagram corresponding to the first target object is generated according to the first foreground probability and/or the first background probability of each pixel.
The sum of the first foreground probability and the first background probability is 1.
In one embodiment, if different pixel colors are set for different first foreground probabilities in advance, a thermodynamic diagram may be generated according to the pixel colors corresponding to the first foreground probabilities of the pixels in the feature image.
The first foreground probability of each pixel in the thermodynamic diagram is the first foreground probability of the corresponding pixel in the feature image.
Thus, the pixel area where the first target object is located can be determined by the color distribution in the thermodynamic diagram.
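One possible form of the thermodynamic-diagram head is sketched below; the two-channel softmax design is only an assumption consistent with the statement that the first foreground probability and the first background probability of a pixel sum to 1, not a structure specified by the patent.

```python
import torch
import torch.nn as nn

class HeatmapHead(nn.Module):
    """Assumed head that turns the feature image into a two-channel map; softmax
    over the channel dimension gives, per pixel, a first foreground probability
    (channel 0) and a first background probability (channel 1) summing to 1."""
    def __init__(self, feat_channels: int = 64):
        super().__init__()
        self.conv = nn.Conv2d(feat_channels, 2, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        logits = self.conv(features)                 # (N, 2, H/2, W/2)
        return torch.softmax(logits, dim=1)          # per-pixel probabilities
```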
S1003: according to the thermodynamic diagram, first estimated positioning information of a first designated key point of the first target object is determined.
In this embodiment, the first target object is taken as the piston and the first designated key point as the center point of the piston, by way of example only. In practical application, the first target object may be any object and the first designated key point may be any key point in the first target object, which is not limited herein.
When determining the first estimated positioning information of the first designated key point of the first target object, the following steps may be adopted:
the first step: and determining initial estimated positioning information of the first designated key point according to the first foreground probability and the pixel value corresponding to each pixel.
Specifically, first foreground probabilities meeting first target object screening conditions are screened out from the first foreground probabilities, the maximum pixel value in pixel values corresponding to at least one screened first foreground probability, namely the peak value of the thermodynamic diagram, is determined, the coordinate value of an image coordinate point corresponding to the maximum pixel value is determined, and initial estimated positioning information is determined according to the coordinate value.
The first target object screening condition may be that the first foreground probability is greater than a first preset probability threshold, and in practical application, both the first target object screening condition and the first preset probability threshold may be set according to a practical application scenario, for example, the first preset probability threshold may be 0.5, which is not limited herein.
In one embodiment, when screening the first foreground probabilities meeting the first target object screening condition from the first foreground probabilities, the following steps may be adopted:
Whether each first foreground probability is greater than the first preset probability threshold is judged respectively. If none of the first foreground probabilities is greater than the first preset probability threshold, it is determined that no first target object exists in the image to be detected. If a first foreground probability is greater than the first preset probability threshold, it is determined that the first target object exists in the image to be detected, and the first foreground probabilities greater than the first preset probability threshold are screened out from the first foreground probabilities.
In one embodiment, a first foreground channel and a first background channel of the thermodynamic diagram are also obtained according to a first foreground probability and a first background probability corresponding to each pixel, respectively.
The first foreground channel is the first foreground probability of each pixel, and the first background channel is the first background probability of each pixel.
The sizes of the characteristic image and the thermodynamic diagram can be expressed as (2 × H/2 × W/2), where 2 is the number of channels of the thermodynamic diagram, H is the height of the thermodynamic diagram and W is its width. The initial estimated positioning information can be expressed as (H_x, H_y), where H_x is the initial estimated coordinate of the first designated key point on the x-axis and H_y is the initial estimated coordinate of the first designated key point on the y-axis.
In this way, whether the first target object exists in the image to be detected or not can be detected according to the first foreground probability of each pixel, and the first target object in the image to be detected is initially positioned.
And a second step of: and determining the coordinate point offset of the first designated key point according to the characteristic image.
Specifically, according to the feature images, coordinate point offset amounts of the first specified key points in the x axis and the y axis are respectively determined.
Wherein the coordinate point offset can be expressed as (Offset_x, Offset_y).
And a third step of: and adjusting the initial estimated positioning information according to the coordinate point offset to obtain first estimated positioning information.
Specifically, the sum of the coordinate point offset and the initial estimated positioning information is determined as first estimated positioning information.
In one embodiment, the first estimated positioning information (Q_x, Q_y) = (Offset_x, Offset_y) + (H_x, H_y), where Q_x is the estimated positioning coordinate of the first designated key point on the x-axis and Q_y is the estimated positioning coordinate of the first designated key point on the y-axis.
Thus, the first designated key point can be accurately positioned.
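A hedged sketch of this three-step localization is given below, assuming the foreground channel of the thermodynamic diagram and a regressed offset map are available as tensors; the function name and the 0.5 screening threshold are illustrative assumptions.

```python
import torch

def locate_first_keypoint(fg_heatmap: torch.Tensor,
                          offset_map: torch.Tensor,
                          prob_threshold: float = 0.5):
    """fg_heatmap: (H, W) foreground channel of the thermodynamic diagram.
    offset_map:  (2, H, W) assumed offset head output (channel 0 = x, 1 = y).
    Returns (Q_x, Q_y), or None when no pixel passes the screening condition,
    i.e. no first target object is judged to exist in the image to be detected."""
    mask = fg_heatmap > prob_threshold                    # first target object screening condition
    if not mask.any():
        return None
    masked = torch.where(mask, fg_heatmap, torch.zeros_like(fg_heatmap))
    flat_idx = int(torch.argmax(masked))                  # peak of the thermodynamic diagram
    h_y, h_x = divmod(flat_idx, fg_heatmap.shape[1])      # initial estimate (H_x, H_y)
    q_x = h_x + float(offset_map[0, h_y, h_x])            # Q_x = H_x + Offset_x
    q_y = h_y + float(offset_map[1, h_y, h_x])            # Q_y = H_y + Offset_y
    return (q_x, q_y)
```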
S1004: and determining the key point offset of the second designated key point of the first target object according to the characteristic image.
Specifically, the key point offset is an offset between a specified number of second specified key points of the first target object and the first specified key points, respectively.
In one embodiment, the center point of the piston is set as the first designated key point and the designated number is 2. According to the characteristic image, the key point offset (P1_Δx, P1_Δy) between one second designated key point of the piston and the first estimated positioning information (Q_x, Q_y) is determined, and the key point offset (P2_Δx, P2_Δy) between the other second designated key point of the piston and the first estimated positioning information (Q_x, Q_y) is determined.
In this embodiment, the designated number of 2 and second designated key points at specific positions in the piston are described only as an example. In practical application, the designated number may be any integer greater than 1 and the second designated key points may be at any positions in the first target object, which is not limited herein.
In this way, the offset between the second specified keypoint and the first specified keypoint can be determined.
S1005: and determining second estimated positioning information according to the key point offset and the first estimated positioning information.
Specifically, the sum of the key point offset and the first estimated positioning information is determined to be the second estimated positioning information.
In one embodiment, if the designated number is 2, the second estimated positioning information of one second designated key point can be determined as (P1_x, P1_y) = (P1_Δx, P1_Δy) + (Q_x, Q_y), and the second estimated positioning information of the other second designated key point as (P2_x, P2_y) = (P2_Δx, P2_Δy) + (Q_x, Q_y).
Thus, the second designated key point can be positioned according to the specific position of the first designated key point.
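Under the same assumptions, S1004 and S1005 reduce to adding each regressed key point offset to (Q_x, Q_y); the sketch below, with a hypothetical helper name, illustrates the piston case with two second designated key points.

```python
def locate_second_keypoints(q_xy, keypoint_offsets):
    """q_xy: the first estimated positioning information (Q_x, Q_y).
    keypoint_offsets: assumed list of (delta_x, delta_y) pairs regressed by the
    positioning model, one per second designated key point."""
    q_x, q_y = q_xy
    return [(q_x + dx, q_y + dy) for dx, dy in keypoint_offsets]

# Piston example from the text (the numeric values are placeholders):
# second_points = locate_second_keypoints((120.0, 80.0), [(35.0, -10.0), (-35.0, -10.0)])
```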
Step 101: and estimating a second target object image area of the second target object according to the positioning information of the key points and the relative position relation between the first target object and the second target object.
Specifically, when step 101 is performed, the following steps may be adopted:
s1011: and carrying out affine transformation on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an affine transformation graph.
In one embodiment, the designated number is 2, and affine transformation is carried out on the image to be detected according to the first estimated positioning information (Q_x, Q_y), the second estimated positioning information (P1_x, P1_y) of one second designated key point and the second estimated positioning information (P2_x, P2_y) of the other second designated key point, so as to obtain an affine transformation graph.
Further, it may be determined whether the image to be detected is a forward image of the first target object, and if so, S1011 may not be executed.
In practical application, whether affine transformation needs to be performed on the image to be detected may be determined according to the actual application scenario, which is not limited herein.
S1012: the image size of the first target object that is also output by the positioning model is obtained.
Specifically, the image to be detected is input to the positioning model, and the image size of the first target object can also be output.
S1013: and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relation.
Specifically, according to the thermodynamic diagram, determining the image size of the first target object, acquiring the relative position relationship between the first target object and the second target object, and estimating the image area of the second target object corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relationship.
Thus, the image area of the second target object in the affine transformation map can be estimated.
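As a purely illustrative sketch of S1011 to S1013 (the patent does not prescribe a specific implementation), the three located key points can be mapped to fixed canonical positions with OpenCV's affine routines; the canonical positions and output size are deployment-specific assumptions derived from the known part geometry.

```python
import cv2
import numpy as np

def rectify_image(image, q_xy, p1_xy, p2_xy, canonical_pts, out_size):
    """Maps the first designated key point and the two second designated key
    points to assumed canonical positions, yielding a forward-facing affine
    transformation graph of the first target object.

    canonical_pts: 3x2 array of target positions for (Q, P1, P2) - an assumption.
    out_size:      (width, height) of the affine transformation graph.
    """
    src = np.float32([q_xy, p1_xy, p2_xy])
    dst = np.float32(canonical_pts)
    matrix = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(image, matrix, out_size)

# The second target object image area can then be taken as a rectangle placed
# relative to the transformed key points, using the image size of the first
# target object and the known relative position relationship (for instance, the
# piston-ring groove lying a fixed fraction of the piston height from the center).
```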
Step 102: and cutting the image to be detected according to the second target object image area to obtain an assembly state detection area diagram.
Specifically, the affine transformation map is cut according to the second target object image area, and an assembly state detection area map is obtained.
This is because, in an actual production line installation environment, the first target object (e.g., the piston) is not stationary and the viewing angle of the photographing device (e.g., a camera) varies across multiple angles. The first target object is therefore corrected and cut to obtain an effective assembly area of the second target object, so that the accuracy of second target object segmentation can be improved in the subsequent segmentation step.
Step 103: and inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result.
Specifically, the assembly state detection model is constructed based on a deep learning algorithm. The assembly state detection result is used for representing the assembly state between the first target object and the second target object.
Wherein, when the detection model is adopted to execute step 103, the following steps can be adopted:
s1031: and carrying out semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object.
Since the assembly state detection result can be obtained by detecting whether the designated feature exists in the assembly region of the second target object when the assembly state between the second target object and the first target object is detected, in the embodiment of the present application, the partial region map of the assembly region of the second target object, that is, the semantic segmentation map, is obtained by means of semantic segmentation, so that in the subsequent detection step, the assembly state detection can be performed through the semantic segmentation map.
For example, referring to fig. 2, an exemplary diagram of an image to be detected is provided in an embodiment of the present application. The image to be detected comprises a piston (namely the first target object) and a piston ring (namely the second target object); the center point of the piston is used as the first designated key point, the designated number is set to 2 (that is, two second designated key points are set), and the designated feature is set to be the gear. In this way, the image to be detected is corrected and cut according to the positioning results of the first designated key point and the two second designated key points to obtain the assembly state detection area diagram, and the semantic segmentation map is obtained from the assembly state detection area diagram by means of semantic segmentation, so that in the subsequent step it is only necessary to detect whether the gear exists in the assembly area of the piston ring to judge whether the assembly state between the piston and the piston ring is the installed state.
Thus, the image range of the image processing in the subsequent step and the data amount at the time of the image processing can be reduced, and the accuracy of the fitting state detection can be improved.
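For illustration, any per-pixel two-class segmentation network can stand in for this stage of the assembly state detection model; FCN-ResNet50 is used in the sketch below purely as an example and is not named in the patent.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

# Assumed stand-in segmentation backbone with two classes
# (second target object vs. background).
seg_model = fcn_resnet50(num_classes=2)
seg_model.eval()

def second_foreground_probability(region_map: torch.Tensor) -> torch.Tensor:
    """region_map: (1, 3, H, W) assembly state detection area map, normalized.
    Returns an (H, W) map of second foreground probabilities."""
    with torch.no_grad():
        logits = seg_model(region_map)["out"]      # (1, 2, H, W)
        probs = torch.softmax(logits, dim=1)
    return probs[0, 1]                             # channel 1 assumed to be the second target object
```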
S1032: and respectively determining a second foreground probability that each pixel in the semantic segmentation map is a second target object.
S1033: and screening out the second foreground probability meeting the screening condition of the second target object from the second foreground probabilities.
S1034: and obtaining an assembly state detection result according to the pixel corresponding to the screened second foreground probability.
Specifically, whether the designated features of the second target object exist or not is judged according to the pixels corresponding to the screened second foreground probability, if yes, an assembly state detection result representing installation is obtained, and otherwise, an assembly state detection result representing uninstalled is obtained.
For example, if the first target object is a piston, the second target object is a piston ring, and the designated feature is a gear, judging whether the gear exists according to the pixel corresponding to the screened second foreground probability, if yes, obtaining an assembly state detection result representing installation, otherwise, obtaining an assembly state detection result representing uninstalled.
Further, when S1034 is executed, the following steps may be adopted:
Specifically, the image area of the pixels corresponding to the screened second foreground probabilities is determined. If the image area is higher than the preset area threshold, whether the assembly state between the first target object and the second target object is the installation state is judged according to the pixels corresponding to the screened second foreground probabilities, and the assembly state detection result is obtained; otherwise, the semantic segmentation map is discarded.
This is because factors such as occlusion may make the effective image area too small and the detection result inaccurate; therefore, to ensure the accuracy of the detection result, a semantic segmentation map whose image area is lower than the preset area threshold can be removed.
In practical application, the preset area threshold may be set according to a practical application scenario, which is not limited herein.
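A hedged sketch of the screening, area check and final judgment is given below; the probability and area thresholds and the has_designated_feature callback (for instance, a check for the piston ring's designated feature) are assumptions rather than values fixed by the patent.

```python
from typing import Callable, Optional
import torch

def assembly_state_from_segmentation(
    fg_prob: torch.Tensor,
    has_designated_feature: Callable[[torch.Tensor], bool],
    prob_threshold: float = 0.5,
    area_threshold: int = 200,
) -> Optional[str]:
    """fg_prob: (H, W) map of second foreground probabilities.
    Returns "installed" / "not installed", or None when the screened image area
    is not higher than the preset area threshold and the map should be discarded."""
    mask = fg_prob > prob_threshold               # second target object screening condition
    area = int(mask.sum().item())                 # image area of the screened pixels
    if area <= area_threshold:
        return None                               # too little visible area (e.g. occlusion): discard
    return "installed" if has_designated_feature(mask) else "not installed"
```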
Further, before the assembly state detection, the positioning model and the assembly state detection model are trained respectively to obtain a trained positioning model and an assembly state detection model.
In one embodiment, a plurality of image samples are input to a positioning model, and first estimated positioning information of a first specified key point of a first target object and second estimated positioning information of a specified number of second specified key points of the first target object are output. Then, determining a first difference value between the first actual positioning information of the first designated key point and the corresponding first estimated positioning information, determining a second difference value between the second actual positioning information of the second designated key point and the corresponding second estimated positioning information, and adjusting model parameters in the positioning model according to each first difference value and each second difference value to obtain a trained positioning model.
In one embodiment, a plurality of image samples are input into an assembly state detection model, a predicted assembly state detection result is output, and model parameters in the assembly state detection model are adjusted according to the predicted assembly state detection result and a corresponding actual assembly state detection result to obtain a trained assembly state detection model.
When the positioning model and the assembly state detection model are trained, some original attributes of the image samples, such as illumination and blurriness, can be randomly modified to improve the robustness and accuracy of the models.
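A minimal training-loop sketch incorporating such random illumination and blur perturbation is shown below; the augmentation ranges, the L1 loss and the model interface are assumptions rather than details fixed by the patent.

```python
import random
import torch
from torchvision.transforms import functional as TF

def augment(image: torch.Tensor) -> torch.Tensor:
    """Randomly perturbs illumination and blur on one image sample (assumed ranges)."""
    if random.random() < 0.5:
        image = TF.adjust_brightness(image, random.uniform(0.6, 1.4))
    if random.random() < 0.3:
        image = TF.gaussian_blur(image, kernel_size=5)
    return image

def train_positioning_model(model, optimizer, loader, criterion=torch.nn.L1Loss()):
    """One epoch over image samples; targets are the actual positioning information
    and the model is assumed to return the corresponding estimated positioning."""
    model.train()
    for images, actual_positions in loader:
        images = torch.stack([augment(img) for img in images])
        estimated_positions = model(images)
        loss = criterion(estimated_positions, actual_positions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```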
According to the embodiment of the application, the positioning model constructed based on the deep learning algorithm locates the first target object and the second target object, and the image to be detected is corrected and cut according to the positioning results to obtain an effective assembly area of the second target object. The assembly state detection model constructed based on the deep learning algorithm then performs semantic segmentation and assembly state detection on the corrected and cut assembly state detection area diagram to obtain the assembly state detection result. This effectively reduces the influence of changes in external factors such as illumination, occlusion and blur on assembly state detection, improves the robustness of assembly state detection and, compared with traditional assembly state detection methods, significantly improves its accuracy.
Referring to fig. 3, an architecture diagram of an assembly state detection system according to an embodiment of the present application includes a positioning module, a correction module, and a segmentation module.
The positioning module is used for: acquiring an acquired image to be detected, inputting the image to be detected into a pre-trained positioning model, and acquiring first estimated positioning information and second estimated positioning information of a first target object in the image to be detected.
Fig. 4 is a schematic structural diagram of a positioning module according to an embodiment of the present application. The positioning module comprises a feature extraction module, a first sub-module, a second sub-module and a third sub-module.
Wherein, the feature extraction module is used for: and extracting the characteristics of the image to be detected to obtain a characteristic image. The first submodule is used for: obtaining a thermodynamic diagram according to the characteristic image, and obtaining initial estimated positioning information of a first designated key point of the first target object according to the obtained thermodynamic diagram.
A second sub-module: used for determining the coordinate point offset of the first designated key point according to the characteristic image, and adjusting the initial estimated positioning information by the coordinate point offset to obtain the first estimated positioning information.
And a third sub-module: and the key point offset is used for determining the key point offset of the second designated key point according to the characteristic image, and determining second estimated positioning information according to the key point offset and the first estimated positioning information.
The correction module is used for: and carrying out image correction and image cutting on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an assembly state detection area diagram corresponding to the second target object.
The detection module is used for: performing semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object, and performing assembly state detection according to the semantic segmentation graph to obtain an assembly state detection result.
Fig. 5 is a schematic structural diagram of a detection module according to an embodiment of the present application. The detection module comprises a semantic segmentation module and an assembly state detection module.
The semantic segmentation module is used for: and carrying out semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object.
The assembly state detection module is used for: and detecting the assembly state according to the semantic segmentation graph to obtain an assembly state detection result.
Referring to fig. 6, a detailed flowchart of a method for detecting an assembly state according to an embodiment of the present application is shown, where a specific implementation flow of the method is as follows:
step 600: and extracting the characteristics of the image to be detected to obtain a characteristic image.
Step 601: and performing image processing on the characteristic image aiming at the first target object to obtain a thermodynamic diagram corresponding to the first target object.
Step 602: first pre-estimated location information and image size of a first designated key point of a first target object are determined based on the thermodynamic diagram.
Step 603: and determining the key point offset of the second designated key point of the first target object according to the characteristic image.
Step 604: and determining second estimated positioning information according to the key point offset of the second designated key point and the first estimated positioning information.
Step 605: and carrying out affine transformation on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an affine transformation graph.
Step 606: and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relationship between the first target object and the second target object.
Step 607: and cutting the affine transformation map according to the second target object image area to obtain an assembly state detection area map.
Step 608: and carrying out semantic segmentation on the assembly state detection region graph to obtain a semantic segmentation graph of the second target object.
Step 609: and respectively determining a second foreground probability that each pixel in the semantic segmentation map is a second target object.
Step 610: and screening out the second foreground probability meeting the screening condition of the second target object from the second foreground probabilities.
Step 611: and obtaining an assembly state detection result according to the pixel corresponding to the screened second foreground probability.
Specifically, for the implementation of steps 600 to 611, reference may be made to steps 100 to 103 above, which is not repeated here.
In the embodiment of the application, the positioning model constructed based on the deep learning algorithm is adopted to locate the first target object and the second target object, and the image to be detected is corrected and cut according to the positioning results to obtain an effective assembly area of the second target object. The assembly state detection model constructed based on the deep learning algorithm is then adopted to carry out semantic segmentation and assembly state detection on the corrected and cut assembly state detection area diagram to obtain the assembly state detection result, which effectively reduces the influence of changes in external factors such as illumination, occlusion and blur on assembly state detection, improves the robustness of assembly state detection and, compared with traditional assembly state detection methods, significantly improves its accuracy.
Based on the same inventive concept, the embodiment of the present application further provides an assembly state detection device, and since the principle of solving the problem by using the device and the equipment is similar to that of an assembly state detection method, the implementation of the device can refer to the implementation of the method, and the repetition is omitted.
Referring to fig. 7, a schematic structural diagram of an apparatus for detecting an assembly state according to an embodiment of the present application includes:
the positioning unit 701 is configured to input an image to be detected into a pre-trained positioning model to obtain positioning information of a key point of a first target object in the image to be detected, where the positioning model is constructed based on a deep learning algorithm;
a pre-estimating unit 702, configured to pre-estimate a second target object image area of the second target object according to the positioning information of the key point and the relative positional relationship between the first target object and the second target object;
a clipping unit 703, configured to clip the image to be detected according to the second target object image area, to obtain an assembly state detection area map;
and a detection unit 704, configured to input the assembly state detection area diagram to a pre-trained assembly state detection model, to obtain an assembly state detection result, where the assembly state detection model is constructed based on a deep learning algorithm, and the assembly state detection result is used to represent an assembly state between the first target object and the second target object.
In one embodiment, the positioning information of the key point includes first estimated positioning information and second estimated positioning information, and the positioning unit 701 is configured to:
extracting features of the image to be detected to obtain a feature image;
performing image processing on the feature image with respect to the first target object to obtain a thermodynamic diagram (heat map) corresponding to the first target object;
determining first estimated positioning information of a first designated key point of the first target object according to the thermodynamic diagram;
determining key point offsets of second designated key points of the first target object according to the feature image, where the key point offsets are the offsets between a designated number of second designated key points of the first target object and the first designated key point, respectively;
and determining the second estimated positioning information according to the key point offsets and the first estimated positioning information (a decoding sketch follows below).
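The sketch below illustrates one way such a decoding step could be written, assuming the positioning model outputs a single-channel thermodynamic diagram (heat map) for the first designated key point together with a per-pixel offset map for the second designated key points; the array shapes and the `decode_keypoints` name are illustrative assumptions, not the patent's specified implementation.

```python
import numpy as np

def decode_keypoints(heatmap, offset_map, num_second_points):
    """Hypothetical decoding of the positioning model outputs.

    heatmap:    (H, W) first foreground probabilities for the first designated key point
    offset_map: (H, W, 2 * num_second_points) regressed key point offsets (assumed layout)
    """
    # First estimated positioning information: peak of the thermodynamic diagram.
    y, x = np.unravel_index(int(np.argmax(heatmap)), heatmap.shape)
    first_kp = np.array([x, y], dtype=np.float32)
    # Second estimated positioning information: first estimate plus the key point
    # offsets read out at the peak location.
    offsets = offset_map[y, x].reshape(num_second_points, 2)
    second_kps = first_kp + offsets
    return first_kp, second_kps
```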
In one embodiment, the positioning unit 701 is configured to:
determining, for each pixel in the thermodynamic diagram, a first foreground probability that the pixel corresponds to the first target object;
determining initial estimated positioning information of the first designated key point according to the first foreground probabilities and the pixel values corresponding to the pixels;
determining a coordinate point offset of the first designated key point according to the feature image;
and adjusting the initial estimated positioning information according to the coordinate point offset to obtain the first estimated positioning information.
In one embodiment, the positioning unit 701 is configured to:
screening out, from the first foreground probabilities, the first foreground probabilities meeting the screening condition of the first target object;
determining the maximum pixel value among the pixel values corresponding to the at least one screened first foreground probability;
determining the coordinate value of the image coordinate point corresponding to the maximum pixel value;
and determining the initial estimated positioning information according to the coordinate value (a sketch covering this refinement follows below).
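A compact sketch of these two refinement steps is given below. The screening condition is modelled as a simple probability threshold and the coordinate point offset is read from a regressed offset map at the selected pixel; the threshold value, the map layout and the function name are assumptions made only for illustration.

```python
import numpy as np

def refine_first_keypoint(heatmap, coord_offset_map, prob_threshold=0.5):
    """Hypothetical computation of the first estimated positioning information."""
    # Screen out first foreground probabilities meeting the screening condition.
    screened = np.where(heatmap >= prob_threshold, heatmap, -np.inf)
    if not np.isfinite(screened).any():
        return None  # no pixel passes the screening condition
    # Initial estimated positioning information: coordinate of the maximum pixel
    # value among the screened pixels.
    y, x = np.unravel_index(int(np.argmax(screened)), screened.shape)
    # Adjust by the coordinate point offset regressed at that pixel (sub-pixel shift).
    dx, dy = coord_offset_map[y, x]
    return np.array([x + dx, y + dy], dtype=np.float32)
```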
In one embodiment, the estimating unit 702 is configured to:
carrying out affine transformation on the image to be detected according to the first estimated positioning information and the second estimated positioning information to obtain an affine transformation graph;
acquiring the image size of the first target object, which is also output by the positioning model;
and estimating the second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relationship (see the sketch below).
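One possible realization of this correction-and-estimation step is sketched below using OpenCV's affine utilities. The canonical destination layout and the relative-position values are placeholders; they, and the function name, are assumptions rather than the values used in the patent.

```python
import cv2
import numpy as np

def estimate_second_object_region(image, first_kp, second_kps, obj_size):
    """Hypothetical correction and region estimation step.

    first_kp / second_kps come from the positioning model; obj_size is the image
    size of the first target object that the model is assumed to also output.
    """
    w, h = obj_size
    # Affine transformation: map three key points to an assumed canonical layout,
    # which corrects rotation/scale of the part in the image to be detected.
    src = np.float32([first_kp, second_kps[0], second_kps[1]])
    dst = np.float32([[w / 2, h / 2], [w, h / 2], [w / 2, h]])  # assumed layout
    M = cv2.getAffineTransform(src, dst)
    affine_map = cv2.warpAffine(image, M, (image.shape[1], image.shape[0]))
    # In the corrected view, the second target object image area lies at a fixed
    # relative position with respect to the first target object (values assumed).
    region = (int(w * 0.25), int(h * 0.1), int(w * 0.5), int(h * 0.2))  # x, y, w, h
    return affine_map, region
```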
In one embodiment, the detection unit 704 is configured to:
carrying out semantic segmentation on the assembly state detection area map to obtain a semantic segmentation map of the second target object;
respectively determining a second foreground probability that each pixel in the semantic segmentation map is a pixel corresponding to the second target object;
screening out second foreground probabilities meeting the screening condition of the second target object from the second foreground probabilities;
and obtaining the assembly state detection result according to the pixels corresponding to the screened second foreground probabilities (see the sketch below).
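The following sketch shows one way the segmentation output could be post-processed. It assumes a two-class segmentation head whose channel 1 corresponds to the second target object and models the screening condition as a probability threshold; both assumptions are illustrative, not taken from the patent.

```python
import numpy as np

def screen_second_object_pixels(seg_logits, prob_threshold=0.5):
    """Hypothetical post-processing of the assembly state detection model's
    segmentation head; seg_logits has shape (2, H, W), channel 1 assumed to be
    the second target object (e.g. the piston ring)."""
    # Softmax over the class channel gives the second foreground probability per pixel.
    shifted = seg_logits - seg_logits.max(axis=0, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=0, keepdims=True)
    second_fg = probs[1]
    # Screening condition: keep pixels whose probability passes the threshold.
    mask = second_fg >= prob_threshold
    return mask, second_fg
```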
In one embodiment, the detection unit 704 is configured to:
determining the image area of the pixels corresponding to the screened second foreground probabilities;
and if the area of the image area is greater than a preset area threshold, judging, according to the pixels corresponding to the screened second foreground probabilities, whether the assembly state between the first target object and the second target object is the installation state (a sketch of this decision rule follows below).
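A minimal sketch of this decision rule is given below, building on the screened mask from the previous sketch. The `judge_fn` callback and the `area_threshold` value are assumptions standing in for the pixel-based installed/not-installed judgement, whose exact criteria the embodiment does not spell out here.

```python
def decide_assembly_state(mask, area_threshold, judge_fn):
    """Hypothetical decision rule built on the screened second foreground pixels."""
    # Image area covered by the screened pixels of the second target object.
    area = int(mask.sum())
    # Only when the area exceeds the preset threshold is the installation state
    # judged from those pixels; otherwise too little of the part is visible.
    if area > area_threshold:
        return judge_fn(mask)  # e.g. check ring continuity / seating (assumed)
    return "second target object not detected"
```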
In one embodiment, the first target object is a piston and the second target object is a piston ring.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
The electronic device 8000 includes: a processor 8010 and a memory 8020, and optionally may also include a power supply 8030, a display unit 8040, and an input unit 8050.
The processor 8010 is a control center of the electronic device 8000, connects various components using various interfaces and wires, and performs various functions of the electronic device 8000 by running or executing software programs and/or data stored in the memory 8020, thereby monitoring the electronic device 8000 as a whole.
In the embodiment of the present application, the processor 8010 executes the method for detecting the assembly state provided by the embodiment shown in fig. 1 when calling the computer program stored in the memory 8020.
Optionally, the processor 8010 may include one or more processing units; preferably, the processor 8010 may integrate an application processor and a modem processor, wherein the application processor primarily handles the operating system, user interfaces, applications, etc., and the modem processor primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 8010. In some embodiments, the processor and the memory may be implemented on a single chip; in other embodiments, they may be implemented on separate chips.
The memory 8020 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, various applications, and the like; the storage data area may store data created according to the use of the electronic device 8000, and the like. In addition, the memory 8020 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device, and the like.
The electronic device 8000 also includes a power supply 8030 (e.g., a battery) that provides power to the various components, which may be logically coupled to the processor 8010 via a power management system, such that the power management system may be used to manage charging, discharging, and power consumption.
The display unit 8040 may be used to display information input by a user or information provided to the user, various menus of the electronic device 8000, and the like, and in the embodiment of the present application is mainly used to display a display interface of each application in the electronic device 8000 and objects such as text and pictures displayed in the display interface. The display unit 8040 may include a display panel 8041. The display panel 8041 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like.
The input unit 8050 may be used to receive information such as numbers or characters input by a user. The input unit 8050 may include a touch panel 8051 and other input devices 8052. Among other things, the touch panel 8051, also referred to as a touch screen, may collect touch operations thereon or thereabout by a user (e.g., operations of the user on the touch panel 8051 or thereabout using any suitable object or accessory such as a finger, stylus, etc.).
Specifically, the touch panel 8051 may detect a touch operation by a user, detect signals resulting from the touch operation, convert the signals into coordinates of contacts, send the coordinates of contacts to the processor 8010, and receive and execute a command sent from the processor 8010. In addition, the touch panel 8051 may be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Other input devices 8052 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, on-off keys, etc.), a trackball, mouse, joystick, etc.
Of course, the touch panel 8051 may cover the display panel 8041; when the touch panel 8051 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 8010 to determine the type of the touch event, and the processor 8010 then provides a corresponding visual output on the display panel 8041 according to the type of the touch event. Although in fig. 8 the touch panel 8051 and the display panel 8041 are two separate components implementing the input and output functions of the electronic device 8000, in some embodiments the touch panel 8051 may be integrated with the display panel 8041 to implement the input and output functions of the electronic device 8000.
The electronic device 8000 may also include one or more sensors, such as a pressure sensor, a gravitational acceleration sensor, a proximity light sensor, and the like. Of course, the electronic device 8000 may also include other components such as a camera, as needed in a particular application, and as such are not the components that are of importance in embodiments of the present application, they are not shown in fig. 8 and will not be described in detail.
It will be appreciated by those skilled in the art that fig. 8 is merely an example of an electronic device and is not meant to be limiting; more or fewer components than shown may be included, certain components may be combined, or different components may be used.
In an embodiment of the present application, a readable storage medium has stored thereon a computer program which, when executed by a processor, enables a communication device to perform the steps of the above-described embodiments.
For convenience of description, the above parts are described as being divided into modules (or units) by function. Of course, when implementing the present application, the functions of the modules (or units) may be implemented in one or more pieces of software or hardware.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (9)

1. A method of assembly condition detection, comprising:
inputting an image to be detected into a pre-trained positioning model to obtain key point positioning information of a first target object in the image to be detected, wherein the positioning model is constructed based on a deep learning algorithm;
estimating a second target object image area of the second target object according to the key point positioning information and the relative position relation between the first target object and the second target object;
cutting the image to be detected according to the second target object image area to obtain an assembly state detection area diagram;
inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result, wherein the assembly state detection model is constructed based on a deep learning algorithm, and the assembly state detection result is used for representing the assembly state between the first target object and the second target object;
the estimating a second target object image area of the second target object according to the positioning information of the key point and the relative position relationship between the first target object and the second target object includes:
carrying out affine transformation on the image to be detected according to the first estimated positioning information of the first target object and the second estimated positioning information of the second target object to obtain an affine transformation graph;
the key point positioning information comprises the first estimated positioning information and the second estimated positioning information, the second estimated positioning information is obtained by adding key point offsets to the first estimated positioning information, and the first estimated positioning information is obtained by adding a coordinate point offset of a first designated key point to initial estimated positioning information;
acquiring the image size of the first target object output by the positioning model;
and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relation.
2. The method according to claim 1, wherein the inputting the image to be detected into a pre-trained positioning model to obtain the key point positioning information of the first target object in the image to be detected includes:
extracting the characteristics of the image to be detected to obtain a characteristic image;
performing image processing on the characteristic image with respect to the first target object to obtain a thermodynamic diagram corresponding to the first target object;
determining the first estimated positioning information of a first designated key point of the first target object according to the thermodynamic diagram;
determining the key point offsets of second designated key points of the first target object according to the characteristic image, wherein the key point offsets are the offsets between a designated number of second designated key points of the first target object and the first designated key point, respectively;
and determining the second estimated positioning information according to the key point offsets and the first estimated positioning information.
3. The method of claim 2, wherein determining the first estimated location information for the first specified keypoint of the first target object from the thermodynamic diagram comprises:
respectively determining a first foreground probability that each pixel in the thermodynamic diagram is a pixel corresponding to the first target object;
determining initial estimated positioning information of the first designated key point according to the first foreground probability and the pixel value corresponding to each pixel;
determining coordinate point offset of the first designated key point according to the characteristic image;
and adjusting the initial estimated positioning information according to the coordinate point offset to obtain the first estimated positioning information.
4. The method of claim 3, wherein determining the initial pre-estimated location information for the first specified keypoint based on the first foreground probability and the pixel value for each pixel comprises:
screening out first foreground probabilities meeting the screening condition of the first target object from the first foreground probabilities;
determining the maximum pixel value among the pixel values corresponding to the at least one screened first foreground probability;
determining coordinate values of image coordinate points corresponding to the maximum pixel values;
and determining the initial estimated positioning information according to the coordinate values.
5. The method according to any one of claims 1 to 4, wherein inputting the assembly state detection area map to a pre-trained assembly state detection model to obtain an assembly state detection result includes:
performing semantic segmentation on the assembly state detection area map to obtain a semantic segmentation map of the second target object;
respectively determining a second foreground probability that each pixel in the semantic segmentation map is a pixel corresponding to the second target object;
screening out second foreground probabilities meeting the screening condition of the second target object from the second foreground probabilities;
and obtaining the assembly state detection result according to the pixels corresponding to the screened second foreground probabilities.
6. The method according to claim 5, wherein the obtaining the assembly state detection result according to the pixels corresponding to the screened second foreground probabilities includes:
determining the image area of the pixels corresponding to the screened second foreground probabilities;
if the area of the image area is greater than a preset area threshold, judging whether the assembly state between the first target object and the second target object is the installation state according to the pixels corresponding to the screened second foreground probabilities.
7. An apparatus for detecting an assembled condition, comprising:
the positioning unit is used for inputting an image to be detected into a pre-trained positioning model to obtain the key point positioning information of a first target object in the image to be detected, wherein the positioning model is constructed based on a deep learning algorithm;
the estimating unit is used for estimating a second target object image area of the second target object according to the key point positioning information and the relative position relation between the first target object and the second target object;
the clipping unit is used for clipping the image to be detected according to the second target object image area to obtain an assembly state detection area diagram;
the detection unit is used for inputting the assembly state detection area diagram into a pre-trained assembly state detection model to obtain an assembly state detection result, wherein the assembly state detection model is constructed based on a deep learning algorithm, and the assembly state detection result is used for representing the assembly state between the first target object and the second target object;
wherein the estimating unit is further used for:
carrying out affine transformation on the image to be detected according to the first estimated positioning information of the first target object and the second estimated positioning information of the second target object to obtain an affine transformation graph;
the key point positioning information comprises the first estimated positioning information and the second estimated positioning information, the second estimated positioning information is obtained by adding key point offsets to the first estimated positioning information, and the first estimated positioning information is obtained by adding a coordinate point offset of a first designated key point to initial estimated positioning information;
acquiring the image size of the first target object output by the positioning model;
and estimating a second target object image area corresponding to the second target object according to the first estimated positioning information, the second estimated positioning information, the image size and the relative position relation.
8. An electronic device comprising a processor and a memory storing computer readable instructions that, when executed by the processor, perform the method of any of claims 1-6.
9. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method of any of claims 1-6.
CN202110996297.1A 2021-08-27 2021-08-27 Method and device for detecting assembly state, electronic equipment and storage medium Active CN113706506B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110996297.1A CN113706506B (en) 2021-08-27 2021-08-27 Method and device for detecting assembly state, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110996297.1A CN113706506B (en) 2021-08-27 2021-08-27 Method and device for detecting assembly state, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113706506A CN113706506A (en) 2021-11-26
CN113706506B true CN113706506B (en) 2023-07-28

Family

ID=78656076

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996297.1A Active CN113706506B (en) 2021-08-27 2021-08-27 Method and device for detecting assembly state, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113706506B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107016396A (en) * 2017-04-11 2017-08-04 广州市华颉电子科技有限公司 A kind of assembling connecting piece characteristics of image deep learning and recognition methods
CN111044522A (en) * 2019-12-14 2020-04-21 中国科学院深圳先进技术研究院 Defect detection method and device and terminal equipment

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110197189A (en) * 2018-02-27 2019-09-03 中北大学 A kind of fuse assembly accuracy detection method and equipment
JP7015001B2 (en) * 2018-03-14 2022-02-02 オムロン株式会社 Defect inspection equipment, defect inspection methods, and their programs
WO2020176908A1 (en) * 2019-02-28 2020-09-03 Nanotronics Imaging, Inc. Assembly error correction for assembly lines
CN110738164B (en) * 2019-10-12 2022-08-12 北京猎户星空科技有限公司 Part abnormity detection method, model training method and device
CN111126416A (en) * 2019-12-12 2020-05-08 创新奇智(重庆)科技有限公司 Engine chain wheel identification system and identification method based on key point detection
CN111402228B (en) * 2020-03-13 2021-05-07 腾讯科技(深圳)有限公司 Image detection method, device and computer readable storage medium
CN111428373A (en) * 2020-03-30 2020-07-17 苏州惟信易量智能科技有限公司 Product assembly quality detection method, device, equipment and storage medium
CN112784853B (en) * 2020-12-24 2022-05-03 深兰人工智能芯片研究院(江苏)有限公司 Terminal connection state detection method and device

Also Published As

Publication number Publication date
CN113706506A (en) 2021-11-26

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant