CN113435465A - Image processing and intelligent control method and equipment - Google Patents

Image processing and intelligent control method and equipment

Info

Publication number
CN113435465A
Authority
CN
China
Prior art keywords
plane
pixel
standard
image
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010203443.6A
Other languages
Chinese (zh)
Inventor
梅佳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN202010203443.6A priority Critical patent/CN113435465A/en
Publication of CN113435465A publication Critical patent/CN113435465A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/0002 — Inspection of images, e.g. flaw detection
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10004 — Still image; Photographic image

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

Embodiments of the invention provide an image processing method, an intelligent control method, and corresponding devices. The image processing method includes: dividing those pixels of an image to be processed that lie on the same plane into the same pixel set; determining the plane calculation parameters respectively corresponding to the pixels contained in each pixel set; for any pixel set, determining the plane parameters of the plane corresponding to that set from the plane calculation parameters of its pixels; and obtaining, based on the plane parameters respectively corresponding to the pixel sets, the plane parameters respectively corresponding to the planes in the image to be processed. Embodiments of the invention improve the precision of plane parameter estimation.

Description

Image processing and intelligent control method and equipment
Technical Field
The invention relates to the technical field of electronic devices, and in particular to an image processing method, an intelligent control method, and corresponding equipment.
Background
With the rapid development of science and technology, electronic devices are increasingly applied to plane-based control scenarios. For example, while a robot moves automatically it must recognize the wall surface in front of it in order to avoid obstacles, which in turn requires determining in advance the plane on which that wall surface lies. Typically, the electronic device first captures an image of the scene ahead and then identifies the planes in that image for intelligent control.
In the prior art, after an electronic device captures an image, a machine learning model may be used to predict, for each pixel in the image, a plane parameter and an associated feature linking the pixel to its plane region; spatial aggregation over these per-pixel plane parameters and associated features then yields the plane parameter of the plane corresponding to each plane region. In general, the plane parameters of a plane are defined by its plane equation in the camera coordinate system: with axes x, y, and z, the plane equation is Ax + By + Cz + D = 0, and the plane parameters are A, B, C, and D.
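For concreteness, a plane with parameters (A, B, C, D) in camera coordinates can be checked against its plane equation as below. This is a generic numeric illustration, not code from the patent, and the function names are hypothetical:

```python
import math

def plane_residual(params, point):
    """Evaluate Ax + By + Cz + D for a 3D point; zero means the point lies on the plane."""
    a, b, c, d = params
    x, y, z = point
    return a * x + b * y + c * z + d

def point_plane_distance(params, point):
    """Unsigned distance from a 3D point to the plane Ax + By + Cz + D = 0."""
    a, b, c, _ = params
    return abs(plane_residual(params, point)) / math.sqrt(a * a + b * b + c * c)

# A wall-like plane z = 3 in camera coordinates: 0x + 0y + 1z - 3 = 0
wall = (0.0, 0.0, 1.0, -3.0)
print(plane_residual(wall, (1.0, 2.0, 3.0)))       # 0.0 (on the plane)
print(point_plane_distance(wall, (0.0, 0.0, 5.0))) # 2.0
```

Any point with z = 3 gives a zero residual for this wall, regardless of x and y, which is exactly what the plane equation encodes.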
However, the pixels and the features associating them with their plane regions carry a spatial scale. When the plane parameters are predicted, this scale strongly influences the calculation, and changes in scale perturb it further, so the plane parameters predicted in this way are not accurate enough and their precision is poor.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, an intelligent control method, and corresponding devices, so as to solve the technical problem in the prior art that the plane parameters of the planes in an image are not estimated accurately enough.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
dividing pixels that lie on the same plane, among a plurality of pixels of an image to be processed, into the same pixel set;
determining plane calculation parameters respectively corresponding to at least one pixel contained in the same pixel set;
determining plane parameters of a plane corresponding to any pixel set according to the plane calculation parameters respectively corresponding to the at least one pixel in that set; and
obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
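The four steps above can be sketched as a small pipeline. This is an illustrative outline under assumed helper names (`same_plane_label`, `pixel_plane_params`, `aggregate` are hypothetical), not the patented implementation:

```python
def estimate_plane_parameters(pixels, same_plane_label, pixel_plane_params, aggregate):
    """Generic sketch of the claimed pipeline:
    1) group pixels by plane label, 2) compute per-pixel plane-calculation
    parameters, 3) aggregate each group into one plane parameter, 4) collect
    the result for every pixel set."""
    # Step 1: divide pixels on the same plane into the same pixel set.
    pixel_sets = {}
    for p in pixels:
        pixel_sets.setdefault(same_plane_label(p), []).append(p)
    # Steps 2-4: per-pixel parameters, then one aggregated parameter per set.
    return {label: aggregate([pixel_plane_params(p) for p in members])
            for label, members in pixel_sets.items()}

# Toy usage: the "label" is just a coordinate test, and the per-pixel
# "parameters" are the pixel coordinates themselves, averaged per set.
pixels = [(0, 0), (1, 2), (5, 5), (6, 7)]
result = estimate_plane_parameters(
    pixels,
    same_plane_label=lambda p: p[0] < 3,
    pixel_plane_params=lambda p: p,
    aggregate=lambda ps: tuple(sum(c) / len(ps) for c in zip(*ps)),
)
print(result)  # {True: (0.5, 1.0), False: (5.5, 6.0)}
```

The real method substitutes plane-association clustering for the toy label and plane-equation parameters for the coordinates, but the control flow is the same.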
In a second aspect, an embodiment of the present invention provides an intelligent control method, including:
dividing pixels that lie on the same plane, among a plurality of pixels of an image to be processed, into the same pixel set;
determining plane calculation parameters respectively corresponding to at least one pixel contained in the same pixel set;
determining plane parameters of a plane corresponding to any pixel set according to the plane calculation parameters respectively corresponding to the at least one pixel in that set;
obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set; and
intelligently controlling, by using the plane parameters respectively corresponding to the at least one plane in the image to be processed, the electronic device that captured the image to be processed.
In a third aspect, an embodiment of the present invention provides an image processing apparatus, including: a storage component and a processing component; the storage component is used for storing one or more computer instructions; the one or more computer instructions are invoked by the processing component;
the processing component is to:
dividing pixels that lie on the same plane, among a plurality of pixels of an image to be processed, into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel contained in the same pixel set; determining plane parameters of a plane corresponding to any pixel set according to the plane calculation parameters respectively corresponding to the at least one pixel in that set; and obtaining the plane parameters of at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
In a fourth aspect, an embodiment of the present invention provides an intelligent control device, including: a storage component and a processing component; the storage component is used for storing one or more computer instructions; the one or more computer instructions are invoked by the processing component;
the processing component is to:
dividing pixels that lie on the same plane, among a plurality of pixels of an image to be processed, into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel contained in the same pixel set; determining plane parameters of a plane corresponding to any pixel set according to the plane calculation parameters respectively corresponding to the at least one pixel in that set; obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set; and intelligently controlling, by using the plane parameters respectively corresponding to the at least one plane in the image to be processed, the electronic device that captured the image to be processed.
According to the embodiments of the invention, the pixels of an image to be processed that lie on the same plane are divided into the same pixel set, and the plane calculation parameters respectively corresponding to the pixels contained in that set are determined. From the plane calculation parameters of the pixels in any pixel set, the plane parameters of the plane corresponding to that set are determined, so that the plane parameters respectively corresponding to the pixel sets are obtained as the plane parameters of the corresponding planes of the image to be processed. Clustering the pixels that belong to the same plane into one pixel set, and bringing the per-pixel plane calculation parameters into the determination of the plane parameters, improves the accuracy of plane estimation and yields more accurate plane parameters.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in their description are briefly introduced below. The drawings described below show some embodiments of the present invention; those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart of an embodiment of an image processing method according to the present invention;
FIGS. 2 a-2 b are diagrams illustrating an example of image partitioning according to an embodiment of the present invention;
FIG. 3 is a flowchart of another embodiment of an image processing method according to the present invention;
FIG. 4 is a flowchart of another embodiment of an image processing method according to the present invention;
FIG. 5 is a flowchart of another embodiment of an image processing method according to the present invention;
FIG. 6 is a flowchart of another embodiment of an intelligent control method according to an embodiment of the present invention;
FIGS. 7 a-7 b are diagrams illustrating an exemplary intelligent control method according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating another example of an intelligent control method according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of an embodiment of an image processing apparatus according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of an embodiment of an intelligent control device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, and "a" and "an" generally include at least two, but do not exclude at least one, unless the context clearly dictates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, indicating that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship.
The word "if" as used herein may be interpreted as "when" or "upon" or "in response to determining" or "in response to identifying," depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is identified" may be interpreted as "when it is determined" or "in response to determining" or "when (a stated condition or event) is identified" or "in response to identifying (a stated condition or event)," depending on the context.
It is also noted that the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, so that a commodity or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a commodity or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the commodity or system that includes it.
The technical solution of the embodiments of the present application can be applied to the intelligent control of electronic devices: accurate plane parameters are obtained for the pixels of an image to be processed, so that the respective plane parameters of at least one plane in the image can be estimated with improved accuracy.
In the prior art, electronic devices such as robots, automatic driving devices, and wearable devices may need image-based intelligent plane analysis, so as to analyze the planes in front of them and perform intelligent control with the obtained plane parameters. A common approach is to use a machine learning model to estimate, for each pixel of the image, its plane parameters and the features associating it with the plane on which it lies, and then to aggregate the pixels in plane space using these per-pixel parameters and features to obtain the plane parameters of the plane distributions. However, because the pixels and their plane-association features carry a certain spatial scale, that scale strongly influences the plane parameter prediction, and the plane parameters predicted in this way are not accurate enough.
In the embodiments of the present application, the pixels of an image to be processed that lie on the same plane are divided into the same pixel set, and the plane calculation parameters respectively corresponding to the pixels contained in that set are determined. From the plane calculation parameters of the pixels in any pixel set, the plane parameters of the plane corresponding to that set are determined, so that the plane parameters respectively corresponding to the pixel sets are obtained as the plane parameters of the corresponding planes of the image to be processed. Clustering the pixels that belong to the same plane into one pixel set, and bringing the per-pixel plane calculation parameters into the determination of the plane parameters, improves the accuracy of plane estimation and yields more accurate plane parameters.
The embodiments of the present application will be described in detail below with reference to the accompanying drawings.
As shown in fig. 1, a flowchart of an embodiment of an image processing method provided in an embodiment of the present application may include the following steps:
101: and dividing pixel points which are positioned on the same plane in a plurality of pixel points of the image to be processed into the same pixel set.
The image processing method provided by the embodiments of the application can be applied to an electronic device or to a server corresponding to the electronic device. The electronic device may include, for example, a robot, the in-vehicle equipment of an autonomous car, a wearable device, or automatic positioning equipment; the embodiments of the application do not limit the specific type of the electronic device. The server corresponding to the electronic device may communicate with it over a wired or wireless connection; the embodiments of the application likewise do not limit the specific type of the server.
When the technical solution provided by the application is applied to the electronic device, the image to be processed can be captured by the electronic device itself. When it is applied to the server corresponding to the electronic device, the image to be processed can be captured by the electronic device and sent to the server.
Before dividing the pixels located on the same plane into the same pixel set, the method may further include: capturing the image to be processed, or receiving the image to be processed captured by the electronic device.
The image to be processed can be captured by a camera of the electronic device covering its acquisition area. The acquisition area of the camera may be determined by the acquisition environment, the acquisition range, the acquisition angle, and so on.
The image to be processed is composed of a plurality of pixels. These pixels are divided into at least one pixel set, where any one pixel set contains at least one pixel belonging to the same plane.
For example, suppose the image to be processed is an indoor image: FIG. 2a shows an indoor image 201 to be processed, and FIG. 2b shows the planes into which the indoor image 201 can be divided, namely plane 202, plane 203, plane 204, plane 205, and plane 206. Each of these planes may contain a plurality of pixels, and the pixels in the same plane share the same plane parameters in the corresponding plane equation.
102: and determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set.
Any pixel point can correspond to a corresponding plane calculation parameter, and the plane calculation parameter can be used for confirming data association between the pixel point and a plane. The pixel plane parameters of the pixel plane corresponding to the pixel point can be calculated through the plane calculation parameters of the pixel point.
Any one pixel set comprises at least one pixel point belonging to the same plane.
103: and determining plane parameters of a plane corresponding to the pixel set according to the plane calculation parameters respectively corresponding to at least one pixel point in any one pixel set.
And the plane corresponding to the pixel set is a plane parameter of the plane where at least one pixel point in the pixel set is located. One pixel set comprises at least one pixel point, each pixel point corresponds to a plane calculation parameter, and comprehensive calculation can be performed according to the plane calculation parameters respectively corresponding to the at least one pixel point in the same plane to obtain the plane parameters of the plane.
As a possible implementation manner, determining, according to a plane calculation parameter corresponding to at least one pixel point in any one pixel set, a plane parameter of a plane corresponding to the pixel set may include: calculating pixel plane parameters corresponding to at least one pixel point in any pixel set according to the plane calculation parameters corresponding to the at least one pixel point; and determining plane parameters of a plane corresponding to the pixel set according to the pixel plane parameters respectively corresponding to at least one pixel point.
The plane parameters of a plane include its normal vector (A, B, C) and the intercept D.
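One natural realization of this aggregation step, offered here only as a hypothetical sketch (the patent does not commit to a specific aggregation rule), is to average the per-pixel plane parameters (A, B, C, D) over the set and renormalize the normal:

```python
import math

def aggregate_pixel_planes(pixel_params):
    """Average per-pixel plane parameters (A, B, C, D) into one plane,
    renormalizing (A, B, C) to a unit normal. A simple illustrative
    aggregation; a robust variant might use a median or least squares."""
    n = len(pixel_params)
    a = sum(p[0] for p in pixel_params) / n
    b = sum(p[1] for p in pixel_params) / n
    c = sum(p[2] for p in pixel_params) / n
    d = sum(p[3] for p in pixel_params) / n
    norm = math.sqrt(a * a + b * b + c * c)
    return (a / norm, b / norm, c / norm, d / norm)

# Three noisy per-pixel estimates of the plane z = 3, i.e. (0, 0, 1, -3):
estimates = [(0.0, 0.02, 1.0, -3.1),
             (0.01, 0.0, 0.99, -2.9),
             (-0.01, -0.02, 1.01, -3.0)]
print(aggregate_pixel_planes(estimates))  # close to (0, 0, 1, -3)
```

Averaging over a whole pixel set suppresses the per-pixel noise that makes single-pixel estimates unreliable.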
104: and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to at least one pixel set.
The plane parameter determined by each pixel point in any pixel set is the plane parameter of the plane where the at least one pixel point in the pixel set is located, so that the plane parameter respectively corresponding to the at least one pixel set is the plane parameter respectively corresponding to the at least one plane in the image to be processed.
In some embodiments, the plane parameters respectively corresponding to the at least one plane of the image to be processed are the three-dimensional plane parameters used to perform a planar three-dimensional reconstruction of the image. The plane parameter of any plane of the image is expressed in the camera coordinate system; that is, the plane parameter of a plane is the parameter of that plane in the camera coordinate system. In the embodiments of the application, the pixels of the image that belong to the same plane are clustered, so that the pixels of the same plane are divided into one pixel set, and the plane parameters of the plane corresponding to that set are determined from the plane calculation parameters respectively corresponding to its pixels. Using the plane calculation parameters of the pixels belonging to the same plane simultaneously makes the estimation of that plane's parameters more accurate.
As shown in fig. 3, which is a flowchart of another embodiment of an image processing method provided in the embodiment of the present application, the method may include:
301: and dividing pixel points which are positioned on the same plane in a plurality of pixel points of the image to be processed into the same pixel set.
302: and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set.
The unit normal vector of any pixel is the unit normal vector, in the camera coordinate system, of the pixel plane on which that pixel lies: each pixel corresponds to one unit normal vector, which represents the normal direction of its pixel plane. The pixel planes corresponding to the pixels of the same pixel set are mutually independent; although those pixels are regarded as belonging to one plane, each carries its own pixel plane, and in the ideal case the pixel-plane parameters of all pixels of the set are identical or differ only by a very small error. The pixel plane of a pixel can be regarded as containing that pixel, so a pixel plane and its unit normal vector can be determined from a single pixel.
The pixel depth of any pixel is the distance from the object point corresponding to that pixel in the real world, expressed in the camera coordinate system, to the camera plane. The pixel depth thus captures the distance between the photographed object and the camera, and adding this distance to the estimation of the plane parameters improves the calculation precision.
The unit normal vector of any pixel can be used to determine the normal vector in the plane equation of the pixel plane on which it lies. For example, assume that the unit normal vector of a pixel is (a, b, c); the normal vector of the pixel plane on which it lies is then (A, B, C).
[The relations between (a, b, c) and (A, B, C) appear as formula images in the original publication and are not reproduced here.]
The normal vector supplies A, B, and C of the plane equation Ax + By + Cz + D = 0, but the intercept D is still unknown; D must be obtained from the pixel depth together with the normal vector (or unit normal vector) of the plane.
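The intercept can be recovered by back-projecting the pixel to a 3D point with the camera intrinsics and substituting that point into the plane equation. The following is a generic pinhole-camera sketch, not the patent's own formulas; the intrinsics (fx, fy, cx, cy) are assumed illustrative values:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) at the given depth into
    camera coordinates (x, y, z)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def intercept_from_depth(unit_normal, u, v, depth, fx, fy, cx, cy):
    """Solve Ax + By + Cz + D = 0 for D, using the back-projected point
    and the unit normal (a, b, c) of the pixel's plane."""
    a, b, c = unit_normal
    x, y, z = backproject(u, v, depth, fx, fy, cx, cy)
    return -(a * x + b * y + c * z)

# Pixel at the principal point, 3 m away, on a fronto-parallel plane:
d = intercept_from_depth((0.0, 0.0, 1.0), u=320, v=240, depth=3.0,
                         fx=500.0, fy=500.0, cx=320.0, cy=240.0)
print(d)  # -3.0, so the plane equation is z - 3 = 0
```

With a unit normal, D is simply the negated dot product of the normal with any 3D point on the plane, which is why one pixel's depth and normal suffice for one estimate of D.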
303: and determining plane parameters of a plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any pixel set.
Any one pixel set comprises at least one pixel point belonging to the same plane. Because one pixel set comprises at least one pixel point, each pixel point corresponds to one unit normal vector, and the pixel plane parameters of the pixel plane represented by the unit normal vectors of different pixel points, the unit normal vectors corresponding to the at least one pixel point respectively are synthesized to obtain the normal vector of the same plane where the at least one pixel point is located, and the plane parameters of the plane where the at least one pixel point is located can be expressed more accurately. In an optimal case, for example, when the pixel plane parameters of at least one pixel point in the same pixel set respectively corresponding to the pixel planes are the same, the plane corresponding to the pixel set is the pixel plane parameter.
The pixels on the same plane satisfy the plane equation given by the plane parameters of that plane. The plane parameters of the plane can therefore be determined from the pixel depths and unit normal vectors of the pixels that lie on it.
304: and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to at least one pixel set.
In the embodiments of the application, the pixels belonging to the same plane among the pixels of the image to be processed are clustered, so that the pixels of the same plane are divided into one pixel set, and the plane parameters of the plane corresponding to that set are determined from the pixel depths and unit normal vectors respectively corresponding to its pixels. Using the unit normal vector and the pixel depth of each pixel together lets the estimate of the plane parameters take both direction and distance into account, making the estimation more accurate. Adding the depth, that is, the distance from each pixel's object point to the camera plane, to the determination of the plane parameters thus improves the accuracy of plane estimation and yields more accurate plane parameters.
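As a generic point of comparison, and not the patented method, one can also back-project all pixels of a set to 3D points and fit a plane by least squares. The stdlib-only sketch below fits z = p·x + q·y + r via the normal equations (solved by Cramer's rule), which is adequate for planes that are not near-vertical:

```python
def fit_plane_lstsq(points):
    """Least-squares fit of z = p*x + q*y + r to 3D points, returned as
    plane parameters (A, B, C, D) with (A, B, C) = (p, q, -1) and D = r.
    Degenerates for near-vertical planes; a full method would use PCA."""
    n = len(points)
    sx = sum(p[0] for p in points); sy = sum(p[1] for p in points)
    sz = sum(p[2] for p in points)
    sxx = sum(p[0] * p[0] for p in points); syy = sum(p[1] * p[1] for p in points)
    sxy = sum(p[0] * p[1] for p in points)
    sxz = sum(p[0] * p[2] for p in points); syz = sum(p[1] * p[2] for p in points)
    # Normal equations for [p, q, r], solved by Cramer's rule on a 3x3 system.
    m = [[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]]
    rhs = [sxz, syz, sz]
    det = lambda a: (a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1])
                     - a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0])
                     + a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]))
    d0 = det(m)
    sol = []
    for i in range(3):
        mi = [row[:] for row in m]
        for j in range(3):
            mi[j][i] = rhs[j]
        sol.append(det(mi) / d0)
    p, q, r = sol
    return (p, q, -1.0, r)

# Points lying exactly on the plane z = 2x + 3:
pts = [(0, 0, 3), (1, 0, 5), (0, 1, 3), (1, 1, 5), (2, 2, 7)]
print(fit_plane_lstsq(pts))  # (2.0, 0.0, -1.0, 3.0)
```

A least-squares fit over the whole set plays the same role as the aggregation of per-pixel depths and normals: every member pixel constrains the one shared plane.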
If a plane exists in the image, certain correlations exist between the pixels of the image and that plane: for example, the distances between pixels belonging to the same plane are small, while the distances between pixels belonging to different planes are large. To classify the pixels of the image to be processed, they can therefore be grouped according to plane association parameters. As an embodiment, dividing the pixels located on the same plane into the same pixel set may include:
determining, for a plurality of pixels of the image to be processed, the plane association parameters between each pixel and the plane on which it lies; and
dividing, according to the plane association parameters respectively corresponding to the plurality of pixels, the pixels belonging to the same plane into the same pixel set.
As a possible implementation, dividing the pixels belonging to the same plane into the same pixel set according to their plane association parameters may include: calculating, from the respective plane association parameters of any two of the plurality of pixels, the plane association distance between those two pixels, and dividing the pixels into pixel sets according to these plane association distances.
Optionally, the plane association distance between any two pixels is computed from their respective plane association parameters, and pixels whose mutual plane association distance satisfies a certain distance condition are considered to belong to the same plane, thereby yielding the pixels of each plane.
In one possible design, the plane association parameters may include a plane distribution probability and a plane embedding vector. The plane distribution probability indicates whether a pixel is a point on a plane or a point on a non-planar region. The plane embedding vector is encoded data generated for a pixel relative to a plane; the association between the pixel and the plane on which it lies can be represented by this embedding vector. Thus, the step of determining the plane association parameters between the pixels of the image to be processed and the planes on which they lie may include:
determining the plane distribution probabilities and plane embedding vectors of the plurality of pixels with respect to the planes on which they lie.
At this time, according to the plane association parameters respectively corresponding to the plurality of pixel points, dividing the pixel points located on the same plane among the plurality of pixel points into the same pixel set may include:
and dividing the pixel points positioned on the same plane in the plurality of pixel points into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane on which the pixel points are positioned.
The plane distribution probability of a pixel point can represent whether the pixel point is on a plane, and the plane embedding vector can represent the association between the pixel point and the plane where it is located. Therefore, the plane to which a pixel point belongs can be accurately judged through its plane distribution probability and plane embedding vector, which improves the estimation efficiency for the pixel points.
Pixel points on the same plane have a certain relevance. The relevance of any two pixel points can be determined from their distribution on the plane and their embedding with respect to the plane, specifically from the plane distribution probability and the plane embedding vector of each pixel point on the plane where it is located.
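As a minimal sketch of this idea (the feature construction and the threshold are illustrative assumptions, not taken from this application), the plane association distance between two pixel points can be built from their plane distribution probabilities and plane embedding vectors:

```python
import numpy as np

def plane_association_distance(p1, e1, p2, e2):
    """Distance between two pixel points in plane-association space.

    p1, p2: plane distribution probabilities (floats in [0, 1]).
    e1, e2: plane embedding vectors (1-D arrays).
    Pixels on the same plane should have similar embeddings, giving a
    small distance; concatenating probability and embedding into one
    feature is one illustrative choice, not mandated by the text.
    """
    f1 = np.concatenate(([p1], np.asarray(e1, float)))
    f2 = np.concatenate(([p2], np.asarray(e2, float)))
    return float(np.linalg.norm(f1 - f2))

def same_plane(p1, e1, p2, e2, threshold=0.5):
    # The "certain distance condition" of the text, here a simple
    # threshold with a hypothetical value.
    return plane_association_distance(p1, e1, p2, e2) < threshold
```

Two pixel points whose distance satisfies the condition would then be placed in the same pixel set.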
As shown in fig. 4, a flowchart of another embodiment of an image processing method provided in the embodiment of the present application may include the following steps:
401: and extracting the image characteristics of the image to be processed.
In some embodiments, extracting image features of the image to be processed may include:
and extracting the image features of the image to be processed based on a feature extraction algorithm.
The image features of the image to be processed can be extracted by adopting an image feature extraction algorithm. The feature extraction algorithm used may include the SIFT (Scale-Invariant Feature Transform) algorithm, the PCA (Principal Component Analysis) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, the ResNet-101-FPN (Deep Residual Network-101 with Feature Pyramid Network) algorithm, and the like.
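As a rough, hand-rolled illustration of step 401 (not any of the named algorithms; merely a tiny gradient feature in the spirit of HOG), per-pixel gradient magnitude and orientation can serve as a coarse image feature:

```python
import numpy as np

def extract_gradient_features(image):
    """Minimal gradient-feature sketch: per-pixel gradient magnitude
    and orientation, a crude stand-in for SIFT/HOG/CNN backbones such
    as ResNet-101-FPN."""
    gy, gx = np.gradient(image.astype(float))   # row and column gradients
    magnitude = np.hypot(gx, gy)                # gradient magnitude
    orientation = np.arctan2(gy, gx)            # gradient direction (rad)
    return np.stack([magnitude, orientation], axis=-1)  # H x W x 2
```

A real implementation would feed a learned backbone's feature map to the model instead.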
402: inputting the image characteristics of the image to be processed into the machine learning model so as to obtain the plane distribution probability, the plane embedding vector, the pixel depth and the unit normal vector of the pixel plane corresponding to the plurality of pixel points of the image to be processed on the plane where the plurality of pixel points are respectively located.
The plane distribution probability of any pixel point may indicate whether the pixel point is a point on a plane. When a pixel point belongs to a point on a plane, the plane distribution probability corresponding to the pixel point is 1; when a pixel point does not belong to a point on a plane, the plane distribution probability corresponding to the pixel point is 0.
The plane embedding vector corresponding to any pixel point can represent the mapping relation between the pixel point and the plane where the pixel point is located in the space, and is obtained through calculation of a machine learning model.
The machine learning model used may be trained. The image to be processed is input into the trained machine learning model, so that the plane distribution probability, the plane embedding vector and the pixel depth of a plurality of pixel points of the image to be processed on the plane of the pixel points and the unit normal vector of the pixel plane corresponding to the pixel points can be quickly obtained.
403: and dividing the pixel points positioned on the same plane in the plurality of pixel points into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane on which the pixel points are positioned.
404: and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set according to the pixel depth and the unit normal vector respectively corresponding to the plurality of pixel points.
405: and determining plane parameters of a plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any pixel set.
406: and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to at least one pixel set.
Some steps of the embodiment of the present application are the same as those of the embodiment shown in fig. 1, and are not described herein again.
In the embodiment of the application, after the image features of the image to be processed are extracted, the machine learning model can be used to calculate, for the plurality of pixel points of the image to be processed, the plane distribution probability, the plane embedding vector, the pixel depth, and the unit normal vector of the corresponding pixel plane on the planes where the pixel points are respectively located. The plane distribution probability and the plane embedding vector can then be used to divide the pixel points located on the same plane into the same pixel set. The association features between the pixel points and their planes, namely the plane distribution probability and the plane embedding vector, are obtained by the trained machine learning model, and determining these two quantities allows the pixel points belonging to the same plane to be divided into the same pixel set, improving the classification accuracy and, in turn, the estimation of the plane parameters.
As an example, the machine learning model used in the embodiment shown in fig. 4 is obtained by training in the following way:
determining at least one training image; the multiple pixel points of any training image correspond to standard pixel depth, and standard plane identification and standard plane parameters of a standard plane where the multiple pixel points are respectively located.
And determining the standard distribution probability of the pixel point according to the standard plane identification of the standard plane in which any pixel point of any training image is positioned.
And constructing a machine learning model.
And training to obtain model parameters of the machine learning model by using the standard distribution probability and the standard pixel depth which correspond to the plurality of pixel points of the at least one training image and any one training image respectively, and the standard plane identification and the standard plane parameters of the standard plane in which the plurality of pixel points are respectively located.
The at least one training image may be used to train model parameters of the machine learning model. The standard pixel depth of at least one training image, and the standard plane identifier and standard plane parameters of the standard plane in which the plurality of pixel points are respectively located can be obtained by using measurement, setting or acquisition and other methods.
In the embodiment of the application, by determining at least one training image, the training image is corresponding to the standard pixel depth, and the standard plane identifier and the standard plane parameter of the standard plane where a plurality of pixel points of the training image are respectively located, model parameters of a machine learning model are obtained through training, and accurate model parameters can be obtained.
As a possible implementation manner, the training to obtain the model parameters of the machine learning model by using the standard distribution probability and the standard pixel depth corresponding to each of the plurality of pixel points of the at least one training image and any one of the training images, and the standard plane identifier and the standard plane parameters of the standard plane in which the plurality of pixel points are respectively located may include:
determining reference model parameters of the machine learning model;
and inputting any training image into the machine learning model corresponding to the reference model parameter, and calculating to obtain the estimation distribution probability, the estimation embedding vector, the estimation pixel depth and the estimation plane parameter corresponding to a plurality of pixel points of the training image respectively so as to obtain the estimation distribution probability, the estimation embedding vector, the estimation pixel depth and the estimation plane parameter corresponding to a plurality of pixel points of the at least one training image respectively.
And determining the standard embedded vectors respectively corresponding to the plurality of standard planes of the training images aiming at the estimation embedded vectors respectively corresponding to the plurality of pixel points of any training image and the standard plane identification of the standard plane where the plurality of pixel points are respectively located so as to obtain the standard embedded vectors respectively corresponding to the plurality of standard planes of at least one training image.
And determining the estimation error of the at least one training image based on the estimation distribution probability and the standard distribution probability of a plurality of pixel points corresponding to the at least one training image, the estimation embedding vector, the estimation pixel depth and the standard pixel depth, and the standard embedding vector, the estimation plane parameter and the standard plane parameter corresponding to a plurality of standard planes of the at least one training image.
If the estimation error meets a loss condition, determining a reference model parameter of the machine learning model as a model parameter of the machine learning model;
and if the estimation error does not meet the loss condition, adjusting the reference model parameters of the machine learning model based on the estimation error, and returning to the step of determining the reference model parameters of the machine learning model to continue execution.
In the process of training the model, reference model parameters of the model need to be determined first, and after the reference model parameters are substituted into the machine learning model, the corresponding reference model parameters can be used for training the model parameters of the machine learning model.
In the process of predicting model parameters, the determination of the loss function directly affects the accuracy of the model. Loss estimation is performed separately on four quantities, namely the distribution probability, the embedding vector, the pixel depth and the plane parameters, so that when the loss function satisfies a certain loss condition, the reference model parameters used at that time are taken as the model parameters of the machine learning model. Therefore, further optionally, determining an estimation error of at least one training image based on the estimated distribution probability and the standard distribution probability of the plurality of pixel points corresponding to the at least one training image, the estimated embedding vector, the estimated pixel depth and the standard pixel depth, and the standard embedding vector, the estimated plane parameter and the standard plane parameter corresponding to the plurality of standard planes of the at least one training image, respectively, includes:
and aiming at any training image, determining the balance error of the training image according to the respective estimation distribution probability and standard distribution probability of a plurality of pixel points of the training image.
And determining the embedding error of the training image according to the respective estimated embedding vectors of the plurality of pixel points of the training image and the respective standard embedding vectors of the plurality of standard planes of the training image. And determining the depth error of the training image according to the respective estimated pixel depth and standard pixel depth of a plurality of pixel points of the training image.
And determining the parameter error of the training image according to the respective estimated plane parameters and standard plane parameters of a plurality of pixel points of the training image.
And taking the sum of the balance error, the embedding error, the depth error and the parameter error of the training image as the training error of the training image.
And calculating the sum of training errors respectively corresponding to at least one training image to obtain the estimation error of the at least one training image.
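The summation just described can be sketched as follows (a hypothetical numpy sketch: the application does not fix the exact form of each loss term, so simple L1/L2 forms are assumed, and the embedding error is passed in precomputed):

```python
import numpy as np

def training_error(est_prob, std_prob, est_depth, std_depth,
                   est_params, std_params, embed_err):
    """Training error of one image: sum of the balance (distribution)
    error, embedding error, depth error and parameter error."""
    dist_err = np.abs(np.asarray(est_prob) - np.asarray(std_prob)).sum()
    depth_err = np.abs(np.asarray(est_depth) - np.asarray(std_depth)).sum()
    param_err = np.linalg.norm(np.asarray(est_params)
                               - np.asarray(std_params), axis=-1).sum()
    return dist_err + embed_err + depth_err + param_err

def estimation_error(per_image_errors):
    """Estimation error of at least one training image: sum of the
    training errors of the individual images."""
    return float(sum(per_image_errors))
```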
As a possible implementation manner, determining an embedding error of a training image according to an estimated embedding vector of each of a plurality of pixel points of the training image and a standard embedding vector of each of a plurality of standard planes of the training image includes:
and taking the standard embedded vectors of a plurality of pixel points of the training image respectively corresponding to the standard plane as the standard embedded vectors respectively corresponding to the pixel points.
And determining a first vector error between the plurality of pixel points of the training image and the plane where the pixel points are located according to the respective estimated embedded vectors and the standard embedded vector of the plurality of pixel points of the training image.
Determining a second vector error between any two planes of the training image according to respective standard embedded vectors of a plurality of standard planes of the training image.
An embedding error of the training image is obtained based on a difference between the first vector error and the second vector error.
The first vector error between the plurality of pixel points of the training image and the planes where they are located can be determined according to the vector distance between the estimated embedding vector and the standard embedding vector corresponding to each pixel point. The second vector error between any two planes of the training image may be determined based on the vector distance between the standard embedding vectors respectively corresponding to the two planes.
The embedding error of the training image can constrain both the error between points and their plane and the error between planes. The constraint targets are to minimize the point-to-plane error and to maximize the plane-to-plane error; therefore, the first vector error and the second vector error enter the calculation with opposite signs, that is, the embedding error may include the difference between the first vector error and the second vector error. When the embedding error of at least one training image is calculated, the sum of the absolute values of the embedding errors respectively corresponding to the at least one training image may be used as the embedding error of the at least one training image.
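A minimal numpy sketch of this pull/push structure (the function shape and the Euclidean distance are assumptions; only the sign relationship between the two vector errors comes from the text):

```python
import numpy as np

def embedding_error(pixel_embeddings, pixel_plane_ids, plane_embeddings):
    """First vector error (pull): distance from each pixel point's
    estimated embedding vector to the standard embedding vector of its
    plane (to be minimized). Second vector error (push): distance
    between the standard embedding vectors of different planes (to be
    maximized), entering with the opposite sign."""
    pull = sum(np.linalg.norm(np.asarray(e) - plane_embeddings[pid])
               for e, pid in zip(pixel_embeddings, pixel_plane_ids))
    keys = list(plane_embeddings)
    push = sum(np.linalg.norm(plane_embeddings[a] - plane_embeddings[b])
               for i, a in enumerate(keys) for b in keys[i + 1:])
    return pull - push
```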
Determining the depth error between the estimated pixel depths and the standard pixel depths of the plurality of pixel points of the training image may include: obtaining the depth error according to the sum of the depth differences between the estimated pixel depths and the standard pixel depths respectively corresponding to the plurality of pixel points.
Determining the parameter error between the estimated plane parameters and the standard plane parameters of the plurality of pixel points of the training image may include: obtaining the parameter error according to the sum of the vector distance differences between the estimated plane parameters and the standard plane parameters respectively corresponding to the plurality of pixel points.
In some embodiments, determining, for the estimated embedding vectors corresponding to the plurality of pixel points of any training image respectively and the standard plane identifiers of the standard planes in which the plurality of pixel points are located, the standard embedding vectors corresponding to the plurality of standard planes of the training image respectively, so as to obtain the standard embedding vectors corresponding to the plurality of standard planes of the at least one training image respectively may include:
aiming at any training image, dividing pixel points with the same standard plane identification in a plurality of pixel points of the training image into the same pixel set to obtain pixel sets corresponding to a plurality of standard planes of the training image respectively;
and respectively calculating the vector mean of the estimated embedding vectors of the at least one pixel point in the pixel sets respectively corresponding to the plurality of standard planes, to obtain the standard embedding vectors respectively corresponding to the plurality of standard planes.
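The two steps above can be sketched directly (a hypothetical helper; plane identifiers are arbitrary labels):

```python
import numpy as np

def standard_embeddings(est_embeddings, plane_ids):
    """Standard embedding vector of each standard plane: the vector
    mean of the estimated embedding vectors of the pixel points that
    carry that standard plane identifier."""
    est = np.asarray(est_embeddings, float)
    ids = np.asarray(plane_ids)
    return {pid: est[ids == pid].mean(axis=0) for pid in set(plane_ids)}
```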
In some embodiments, determining the standard distribution probability of any pixel point according to the standard plane identifier of the standard plane where the pixel point is located may include:
if the standard plane mark of the standard plane where any pixel point is located meets the plane distribution condition, determining the standard distribution probability of the pixel point as a first distribution probability;
and if the standard plane mark of the standard plane where any pixel point is located does not meet the plane distribution condition, determining the standard distribution probability of the pixel point as a second distribution probability.
A plurality of standard planes can be included in one training image, each standard plane can be provided with a corresponding standard plane identifier, the standard plane identifier can be used for uniquely identifying the standard plane, and the standard plane identifiers of different standard planes are different. The pixel points belonging to the same standard plane can be associated with the standard plane identifier of the plane, so as to determine that the pixel points belong to the plane corresponding to the standard plane identifier corresponding to the pixel points. For example, one training image includes two standard planes, which may be identified as M1 and M2, at this time, the standard plane identifiers of the pixels in the standard plane M1 are both M1, and the standard plane identifiers of the pixels in the standard plane M2 are both M2. The training image may further include pixel points that are not in the standard plane M1 or the standard plane M2, and these pixel points may be determined as points in a non-plane, and the standard plane identifier of these pixel points may be set to M0.
Wherein the first distribution probability and the second distribution probability represent opposite plane distribution conditions. The first distribution probability represents that a certain pixel point is a point on a plane, and the second distribution probability represents that a certain pixel point is a point on a non-plane.
The standard plane identifier of the standard plane where the pixel points are located meeting the plane distribution condition may include: the standard plane identification of the standard plane where the pixel point is located belongs to a plane identification type, but not belongs to a non-plane identification type. When the standard plane identifier of the standard plane where a pixel is located meets the plane distribution condition, it indicates that the pixel belongs to a point on the plane, and the standard distribution probability of the pixel may be set to be the first distribution probability, for example, the standard distribution probability of the pixel may be set to be 1.
The standard plane identifier of the standard plane where a pixel point is located not satisfying the plane distribution condition may include: the standard plane identifier belongs to the non-plane identification type, but not to the plane identification type. When the standard plane identifier of the standard plane where a pixel point is located does not satisfy the plane distribution condition, it indicates that the pixel point does not belong to a point on a plane, and the standard distribution probability of the pixel point may be set to the second distribution probability, for example, 0.
Taking the above example including the standard plane M1 and the standard plane M2 as an example, the standard distribution probability of the pixels identified by the standard plane M1 and the standard plane M2 is the first distribution probability, and the standard distribution probability of the pixels identified by the standard plane M0 is the second distribution probability.
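Continuing the M0/M1/M2 example, the mapping from standard plane identifier to standard distribution probability is trivial to write down (the identifier scheme mirrors the example above; 1 and 0 are the first and second distribution probabilities):

```python
def standard_distribution_probability(plane_id, non_plane_id="M0"):
    """1.0 (first distribution probability) for pixel points on a
    standard plane, 0.0 (second) for the non-plane identifier."""
    return 0.0 if plane_id == non_plane_id else 1.0
```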
As another embodiment, the dividing, according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane where the plurality of pixel points are located, the pixel points located on the same plane among the plurality of pixel points into the same pixel set may include:
and respectively inputting the plane distribution probability and the plane embedded vector of the plurality of pixel points on the plane where the plurality of pixel points are located into a mean value clustering algorithm, and dividing the pixel points belonging to the same plane into the same pixel set.
The mean clustering algorithm adopted in the embodiment of the present application may include the K-means clustering algorithm. When the K-means clustering algorithm is used to divide the pixel points belonging to the same plane into the same pixel set, as a possible implementation manner, the dividing the pixel points belonging to the same plane into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the planes where they are respectively located includes:
dividing the plurality of pixel points into at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point;
for any clustering center point, determining the vector distance between each pixel point to be classified (candidate pixel point) and the clustering center point according to the plane distribution probability and the plane embedding vector of the candidate pixel point on the plane where it is located and those of the clustering center point;
dividing each candidate pixel point into the pixel set represented by the clustering center point with the smallest vector distance, that is, the highest plane association, according to the vector distances between the candidate pixel point and each clustering center point, to obtain at least one pixel set; each pixel set comprises a plurality of pixel points confirmed to belong to the same plane;
obtaining a central pixel point of at least one candidate pixel point in the at least one pixel set;
if at least one central pixel point and at least one clustering central point meet the convergence condition, determining at least one pixel set formed by dividing the same pixel point into the same pixel set in the plurality of pixel points;
if the at least one central pixel point and the at least one clustering center point do not meet the convergence condition, taking the at least one central pixel point as a new at least one clustering center point, and returning to the step of dividing the plurality of pixel points into the at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point to continue to be executed.
K in the K-means clustering algorithm represents the set number of clustering center points; setting K clustering center points means that the plurality of pixel points of the image to be processed can be divided into K pixel sets. One clustering center point corresponds to one pixel set. When the at least one clustering center point is set, K pixel points can be randomly selected from the plurality of pixel points of the image to be processed to serve as the at least one clustering center point.
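The K-means loop described in the steps above can be sketched in numpy as follows (a generic K-means over per-pixel features built from the plane distribution probability and plane embedding vector; the initialization and convergence test are standard choices, not mandated by the text):

```python
import numpy as np

def kmeans_pixel_sets(features, k, iters=50, seed=0):
    """K-means sketch: `features` is an (N, D) array in which each row
    combines a pixel point's plane distribution probability and plane
    embedding vector. Returns the pixel-set label of every pixel."""
    rng = np.random.default_rng(seed)
    # Randomly pick k pixel points as the initial clustering center points.
    centres = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Vector distance from every pixel to every clustering center point.
        dists = np.linalg.norm(features[:, None, :] - centres[None, :, :],
                               axis=-1)
        labels = dists.argmin(axis=1)  # nearest center -> pixel set
        # Central pixel point of each resulting pixel set.
        centres_new = np.array([features[labels == j].mean(axis=0)
                                if np.any(labels == j) else centres[j]
                                for j in range(k)])
        if np.allclose(centres_new, centres):  # convergence condition met
            break
        centres = centres_new
    return labels
```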
In order to make the at least one pixel set obtained after dividing the plurality of pixel points of the image to be processed better match the number of planes actually present in the image, and to improve clustering efficiency, in one possible design, the number of the at least one cluster center point may be determined by:
dividing the image to be processed according to a plurality of spatial grids to obtain a plurality of regional images;
extracting the area characteristics corresponding to the area images respectively;
dividing a plurality of regional images with regional feature similarity meeting similarity dividing conditions into the same regional set to obtain at least one regional set;
determining the number of the at least one region set as the number of the at least one cluster center point.
Further, optionally, the extracting the region features corresponding to the plurality of region images respectively may include:
respectively calculating the characteristic mean value of at least one pixel point of each of the plurality of regional images;
the dividing the region images of which the similarity of the region features meets the similarity dividing condition into the same region set to obtain at least one region set comprises:
calculating the characteristic distance between the respective characteristic mean values of any two area images in the plurality of area images;
dividing any two area images with the characteristic distance smaller than a distance threshold value into the same area set, and obtaining area sets corresponding to the area images respectively so as to obtain at least one area set.
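The grouping in the last two steps is transitive (any two area images whose feature distance is below the threshold end up in the same set), which a small union-find sketch captures (the helper names are hypothetical):

```python
import numpy as np

def group_regions(feature_means, distance_threshold):
    """Union-find sketch: any two region images whose feature-mean
    distance is below the threshold land in the same region set."""
    n = len(feature_means)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(np.asarray(feature_means[i], float)
                               - np.asarray(feature_means[j], float))
            if d < distance_threshold:
                parent[find(i)] = find(j)

    sets = {}
    for i in range(n):
        sets.setdefault(find(i), []).append(i)
    return list(sets.values())
```

The number of resulting region sets would then be used as the number of clustering center points.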
As a possible implementation manner, the calculating the feature mean of at least one pixel point of each of the plurality of region images respectively may include:
for the at least one pixel point corresponding to any region image, calculating the average embedding vector of the estimated embedding vectors respectively corresponding to the at least one pixel point, to obtain the average embedding vector corresponding to the region image;
and respectively taking the obtained average embedding vector corresponding to the at least one region image as the characteristic mean value of the at least one region image.
The average embedding vector obtained by calculating the estimated embedding vector corresponding to at least one pixel point in any region image can be used as the feature average corresponding to the region image. As another possible implementation manner, the calculating the feature mean of at least one pixel point of each of the plurality of region images respectively may include:
for the at least one pixel point corresponding to any region image, calculating the average gray value of the gray values respectively corresponding to the at least one pixel point, to obtain the average gray value corresponding to the region image;
and respectively taking the average gray value corresponding to the obtained at least one region image as the characteristic mean value of the at least one region image.
The average gray value obtained by calculating the gray value corresponding to at least one pixel point in any region image can be used as the characteristic mean value corresponding to the region image.
As shown in fig. 5, a flowchart of another embodiment of an image processing method provided in the embodiment of the present application may include the following steps:
501: and dividing pixel points which are positioned on the same plane in a plurality of pixel points of the image to be processed into the same pixel set.
502: and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set.
503: and determining pixel plane parameters corresponding to at least one pixel point in any pixel set according to the pixel depth and the unit normal vector corresponding to the at least one pixel point.
504: and carrying out weighted summation on the pixel plane parameters respectively corresponding to the at least one pixel point to obtain the plane parameters of the plane corresponding to the pixel set.
505: and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
In the embodiment of the application, the pixel points of the image to be processed, which are located on the same plane, are divided into the same pixel set to determine the pixel depth and the unit normal vector corresponding to at least one pixel point included in the same pixel set, so that the pixel depth and the unit normal vector corresponding to at least one pixel point in any one pixel set can be determined. When the plane parameters of the plane corresponding to any pixel set are determined, the pixel plane parameters corresponding to at least one pixel point are used for weighted summation, all data of at least one pixel point in one plane are integrated, and the pixel plane parameters are more accurate.
As another embodiment, the determining, according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any one pixel set, the pixel plane parameter respectively corresponding to the at least one pixel point may include:
and obtaining the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point according to the pixel depth and the unit normal vector corresponding to any pixel point in any pixel set.
And determining a unit normal vector corresponding to the pixel point and a pixel plane distance as pixel plane parameters of the pixel point in a camera coordinate system to obtain pixel plane parameters of at least one pixel point in the pixel set in the camera coordinate system respectively.
As a possible implementation manner, the obtaining, according to the pixel depth and the unit normal vector corresponding to any one pixel point in any one pixel set, the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point may include:
determining an image coordinate point of any pixel point in any pixel set in an image coordinate system;
acquiring camera internal parameters of a camera for acquiring the image to be processed;
determining a camera coordinate point of the pixel point in the camera coordinate system according to the pixel depth and the image coordinate point corresponding to the pixel point and by combining the camera internal parameters;
and inputting the camera coordinate point corresponding to the pixel point and the unit normal vector into a plane distance calculation formula to obtain the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point.
Wherein, the plane distance calculation formula may include:
d = N^T * P

wherein P is the camera coordinate point of the pixel point in the camera coordinate system, obtained through pixel depth calculation, and N is the unit normal vector.

P can be determined according to the pixel depth and the camera internal parameters: P = Z * K^-1 * [u, v, 1]^T.

Wherein Z is the pixel depth, K is the camera internal parameter matrix, and [u, v]^T is the image coordinate point of the pixel point in the image coordinate system of the image to be processed.
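The two formulas above can be sketched with numpy; the intrinsic matrix K, the depth Z, the normal N and the pixel coordinates below are illustrative assumptions, not values from the application.

```python
import numpy as np

# Assumed pinhole intrinsics and per-pixel quantities for illustration.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])   # camera internal parameters
Z = 2.5                                 # pixel depth
u, v = 400.0, 260.0                     # image coordinate point of the pixel
N = np.array([0.0, 0.0, 1.0])           # unit normal vector of the pixel plane

# Camera coordinate point: P = Z * K^-1 * [u, v, 1]^T
P = Z * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

# Pixel plane distance from the coordinate origin of the camera coordinate
# system to the pixel plane: d = N^T * P
d = float(N @ P)
```

For a plane facing straight down the optical axis, as assumed here, the plane distance d coincides with the pixel depth Z.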
As a possible design, the performing weighted summation on the pixel plane parameters corresponding to the at least one pixel point to obtain the plane parameters of the plane corresponding to the pixel set may include:
and carrying out weighted summation on the unit normal vectors respectively corresponding to the at least one pixel point to obtain an average normal vector of the at least one pixel point.
And carrying out weighted summation on the pixel plane distances corresponding to the at least one pixel point respectively to obtain the average plane distance corresponding to the at least one pixel point.
And determining the average normal vector and the average plane distance as plane parameters of the plane corresponding to the pixel set.
In a possible design, the performing weighted summation on the planar distances of the pixels corresponding to at least one pixel point respectively to obtain the average planar distance corresponding to the at least one pixel point may include:
obtaining the evaluation weight of at least one pixel point on the plane where the pixel point is located;
and carrying out weighted summation on the evaluation weight and the pixel plane distance corresponding to at least one pixel point respectively to obtain the average plane distance corresponding to at least one pixel point respectively.
The average plane distance is the distance from the origin of coordinates of the camera coordinate system to the plane corresponding to all the pixel points in the pixel set, that is, the distance from the origin of coordinates of the camera coordinate system to the plane corresponding to the pixel set. Any pixel point in the pixel set can satisfy the plane equation formed by the plane parameters of the plane corresponding to the pixel set. The average plane distance corresponds to the equation intercept D in the plane equation Ax + By + Cz + D = 0.
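The weighted summation of per-pixel plane distances into the average plane distance can be sketched as follows; the distances and evaluation weights are illustrative assumptions, and the weights are normalized by their sum as in the worked normal-vector example later in the text.

```python
import numpy as np

# Assumed per-pixel plane distances and evaluation weights for one pixel set.
pixel_plane_distances = np.array([2.48, 2.52, 2.50, 2.50])
evaluation_weights    = np.array([1.0, 1.0, 1.0, 1.0])

# Weighted summation, normalized by the total weight.
average_plane_distance = float(
    (evaluation_weights * pixel_plane_distances).sum() / evaluation_weights.sum()
)
```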
In one possible design, the unit normal vector corresponding to any one pixel point may include: a first value, a second value, and a third value;
the performing weighted summation on the unit normal vectors corresponding to the at least one pixel point respectively to obtain an average normal vector of the at least one pixel point may include:
obtaining the evaluation weight of the at least one pixel point on the plane where the pixel point is located;
weighting and summing the first numerical value of the unit normal vector of at least one pixel point and the corresponding evaluation weight thereof to obtain a first target value;
carrying out weighted summation on the second numerical value of the unit normal vector of the at least one pixel point and the corresponding evaluation weight thereof to obtain a second target value;
weighting and summing a third numerical value of the unit normal vector of the at least one pixel point and the corresponding evaluation weight thereof to obtain a third target value;
and determining an average normal vector formed by the first target value, the second target value and the third target value.
The average normal vector and the average plane distance are plane parameters of a plane where at least one pixel point in the pixel set is located, and a plane equation corresponding to the plane can be determined according to the average normal vector and the average plane distance.
Optionally, the first numerical value may be a coefficient of a pixel point in the camera coordinate system corresponding to the X term in the X axis of the pixel plane, the second numerical value may be a coefficient of a pixel point in the camera coordinate system corresponding to the Y term in the Y axis of the pixel plane, and the third numerical value may be a coefficient of a pixel point in the camera coordinate system corresponding to the Z term in the Z axis of the pixel plane.
For convenience of understanding, taking two pixel points of a plane in an image as an example, the normal vectors in the plane parameters of the two pixel points are (a1, b1, c1), (a2, b2, c2), the weights are w1 and w2, and the average normal vector of the plane parameters of the plane obtained by calculation is:
((w1*a1+w2*a2)/(w1+w2),(w1*b1+w2*b2)/(w1+w2),(w1*c1+w2*c2)/(w1+w2))。
wherein (w1*a1 + w2*a2)/(w1 + w2) is the first target value, (w1*b1 + w2*b2)/(w1 + w2) is the second target value, and (w1*c1 + w2*c2)/(w1 + w2) is the third target value.
The first target value is a coefficient of an X item corresponding to an X axis of a plane where at least one pixel point in the pixel set is located, the second target value is a coefficient of a Y item corresponding to a Y axis of a plane where at least one pixel point in the pixel set is located, and the third target value is a coefficient of a Z item corresponding to a Z axis of a plane where at least one pixel point in the pixel set is located.
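The two-pixel worked example above can be written directly in code; the concrete normals (a1, b1, c1), (a2, b2, c2) and weights w1, w2 here are illustrative assumptions.

```python
import numpy as np

# Assumed unit normals and evaluation weights for two pixel points of one plane.
n1 = np.array([1.0, 0.0, 0.0])   # (a1, b1, c1)
n2 = np.array([0.0, 1.0, 0.0])   # (a2, b2, c2)
w1, w2 = 3.0, 1.0                # evaluation weights

# Weighted summation of the unit normal vectors, normalized by the total weight.
# Components 0, 1, 2 are the first, second and third target values, i.e. the
# X, Y and Z coefficients of the plane equation.
average_normal = (w1 * n1 + w2 * n2) / (w1 + w2)
```

Note that a weighted average of unit vectors is generally not itself of unit length; whether to renormalize the average normal vector is not fixed by the text.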
As shown in fig. 6, a flowchart of an embodiment of an intelligent control method provided in an embodiment of the present application may include:
601: and dividing pixel points which are positioned on the same plane in a plurality of pixel points of the image to be processed into the same pixel set.
602: and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set.
603: and determining plane parameters of a plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any pixel set.
604: and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
605: and intelligently controlling the electronic equipment for acquiring the image to be processed by utilizing the plane parameters respectively corresponding to at least one plane in the image to be processed.
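Steps 601 to 605 can be sketched end to end with toy data; every value and helper below is an assumption for illustration only, standing in for the clustering, network and control logic described elsewhere in the application.

```python
import numpy as np

# 601: pixel sets — here, pixels grouped by a precomputed same-plane label.
labels = np.array([0, 0, 1, 1])

# 602: pixel depth and unit normal vector per pixel point (assumed values).
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0],
                    [0.0, 1.0, 0.0], [0.0, 1.0, 0.0]])

# Assumed camera coordinate points of the pixels, used for d = N^T * P.
points = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 2.0],
                   [0.0, 3.0, 0.0], [1.0, 3.0, 2.0]])

# 603-604: plane parameters per pixel set via (unweighted) averaging.
plane_params = {}
for lab in np.unique(labels):
    mask = labels == lab
    d = np.einsum('ij,ij->i', normals[mask], points[mask])  # per-pixel distances
    plane_params[int(lab)] = (normals[mask].mean(axis=0), float(d.mean()))

# 605: a trivial "intelligent control" decision from the nearest plane distance.
nearest = min(dist for _, dist in plane_params.values())
command = "stop" if nearest < 2.5 else "advance"
```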
As a possible implementation manner, the intelligently controlling the electronic device for acquiring the image to be processed by using the plane parameters respectively corresponding to at least one plane in the image to be processed may include: and generating a first control instruction of the electronic equipment of the image to be processed according to the plane parameters respectively corresponding to at least one plane in the image to be processed, and executing the first control instruction so as to perform first intelligent control on the electronic equipment.
As another possible implementation manner, the intelligently controlling the electronic device that acquires the image to be processed by using the plane parameters respectively corresponding to at least one plane in the image to be processed may include: and generating a second control instruction of the electronic equipment of the image to be processed according to the plane parameters corresponding to at least one plane in the image to be processed, sending the second control instruction to the electronic equipment so that the electronic equipment can receive the second control instruction sent by the server, and executing the second control instruction so as to perform second intelligent control on the electronic equipment.
In the embodiment of the application, the plane parameters of at least one plane in the image to be processed are obtained through the analysis processing work of the image to be processed, so that the electronic equipment for acquiring the image can be intelligently controlled by utilizing the plane parameters respectively corresponding to the at least one plane in the obtained image to be processed. Due to the fact that the obtaining process of the plane parameters of at least one plane in the image to be processed is optimized, the plane parameters of each plane are more accurate, and the accuracy of intelligent control over the electronic equipment is higher.
In the technical solution of the embodiment of the application, three-dimensional reconstruction is performed on the image to be processed, which lies in a two-dimensional image coordinate system, to obtain the plane parameters of each of the at least one plane contained in the image to be processed, where the plane parameters of any plane are three-dimensional plane parameters based on the camera coordinate system. Such three-dimensional plane parameters of the image to be processed have very wide application. In common fields such as automatic driving, intelligent robots, wearable devices, or unmanned aerial vehicles, besides directly using the plane parameters of the at least one plane obtained by three-dimensional reconstruction for driving control, the plane parameters can also be jointly calibrated with sensors such as laser radar and millimeter-wave radar to perform spatial synchronization, thereby realizing accurate intelligent control of electronic devices such as vehicle-mounted automatic driving equipment, intelligent robots, wearable devices, or unmanned aerial vehicles.
As a possible implementation manner, when the electronic device that acquires the image to be processed is intelligently controlled by using the plane parameters corresponding to at least one plane in the image to be processed, the sensing data acquired by the sensors such as the laser radar can be acquired, the pose of the electronic device can be analyzed by using the sensing data, and whether the pose is accurate or not can be determined by comparing the pose with the plane parameters of at least one plane. The pose of the electronic device may include data such as a rotation angle, a rotation direction, and a reference point of the sensor with respect to the reference plane. The reference plane and the at least one plane obtained by the reconstruction of the image to be processed may be in the same coordinate system. For example, when the plane parameters of at least one plane obtained by three-dimensional reconstruction of the image to be processed are based on the camera coordinate system, the reference plane parameters of the reference plane may also be plane parameters belonging to the camera coordinate system.
For convenience of understanding, the application of the technical solution of the embodiment of the present application is described in detail by taking an autonomous vehicle as an example of the electronic device. As shown in fig. 7a, the autonomous vehicle can capture an image of the area in front of it to obtain an image to be processed 701. When the autonomous vehicle M0 sweeps the floor in an indoor space, it can photograph the indoor space within its shooting area to obtain the image to be processed 701, and can divide the pixel points located on the same plane among the plurality of pixel points of the image to be processed into the same pixel set, realizing the division of spatial planes. Plane 702, plane 703, plane 704, etc. in fig. 7b are all planes formed by the at least one pixel point of a corresponding pixel set. The pixel depth and the unit normal vector respectively corresponding to at least one pixel point in the same pixel set can thus be obtained, and the plane parameters of the plane corresponding to the pixel set can be determined using the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any one pixel set. Since the pixel sets correspond to planes, the obtained plane parameters of the at least one pixel set are the plane parameters respectively corresponding to the at least one plane in the image to be processed. When the autonomous vehicle is intelligently controlled, based on the automatic control application scenario of automatic driving, the plane where the autonomous vehicle is located is determined from the image to be processed 701, and the moving path of the autonomous vehicle is determined according to the plane parameters of that plane, which effectively improves moving efficiency and avoids phenomena such as collision with objects in the indoor space.
In the example shown in fig. 7b, the autonomous vehicle can independently complete the entire intelligent control. In some embodiments, the autonomous vehicle may instead interact with a server. As shown in fig. 8, an on-board device (not shown) in the autonomous vehicle 801 may transmit the acquired image to be processed to the server 802, the server 802 performs operations such as image processing and path planning and sends a control instruction to the on-board device in the autonomous vehicle 801, and the on-board device controls the autonomous vehicle 801 to move along the path planned by the server 802.
As shown in fig. 9, a schematic structural diagram of an embodiment of an image processing apparatus provided in an embodiment of the present application, the apparatus may include: a storage component 901 and a processing component 902; the storage component 901 is configured to store one or more computer instructions for being invoked by the processing component 902;
the processing component 902 may be configured to:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set; determining plane parameters of a plane corresponding to a pixel set according to the plane calculation parameters respectively corresponding to at least one pixel point in any pixel set; and obtaining the plane parameters of at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
In the embodiment of the application, the processing component of the image processing device clusters the pixel points belonging to the same plane among the plurality of pixel points of the image to be processed, so as to divide the pixel points of the same plane into one pixel set. The plane parameters of the plane corresponding to the pixel set are then determined from the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in the same pixel set. Since the unit normal vector and the pixel depth of each pixel point are used simultaneously to estimate the plane parameters of the plane, the estimation takes both direction and distance into account, which makes it more accurate. Moreover, adding the pixel depth, which represents the distance from the object point corresponding to the pixel point to the camera plane, to the confirmation process of the plane parameters can improve the accuracy of plane estimation and yield more accurate plane parameters.
As an embodiment, the dividing, by the processing component, the pixel points located on the same plane among the plurality of pixel points of the image to be processed into the same pixel set may specifically be:
determining plane association parameters between a plurality of pixel points of the image to be processed and a plane where the pixel points are located; and dividing the pixels positioned on the same plane in the plurality of pixels into the same pixel set according to the plane association parameters respectively corresponding to the plurality of pixels.
In some embodiments, the determining, by the processing component, plane association parameters between the plurality of pixel points of the image to be processed and the plane in which the pixel points are located may specifically be:
and determining the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane where the pixel points are located.
The dividing, by the processing component, the pixels located in the same plane among the plurality of pixels into the same pixel set according to the plane association parameters respectively corresponding to the plurality of pixels may specifically be:
and dividing the pixel points positioned on the same plane in the plurality of pixel points into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane on which the pixel points are positioned.
In some embodiments, the determining, by the processing component, the plane calculation parameters respectively corresponding to at least one pixel point included in the same pixel set may specifically include:
determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set;
the determining, by the processing component, the plane parameter of the plane corresponding to the pixel set according to the plane calculation parameter respectively corresponding to at least one pixel point in any one pixel set may specifically include:
and determining plane parameters of a plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any pixel set.
As an embodiment, the determining, by the processing component, the plane distribution probability and the plane embedding vector of each of the plurality of pixel points on the plane where the pixel point is located may specifically be:
extracting image features of the image to be processed;
inputting the image characteristics of the image to be processed into a machine learning model so as to obtain the plane distribution probability, the plane embedding vector, the pixel depth and the unit normal vector of the pixel plane corresponding to the pixel points of the image to be processed on the plane where the pixel points are respectively located;
the determining, by the processing component, the pixel depth and the unit normal vector respectively corresponding to at least one pixel point included in the same pixel set may specifically include:
and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set according to the pixel depth and the unit normal vector respectively corresponding to the plurality of pixel points.
In some embodiments, the processing component may train to obtain the machine learning model by:
determining at least one training image; the method comprises the following steps that a plurality of pixel points of any training image respectively correspond to standard distribution probability, standard embedded vectors, standard pixel depths, and standard plane identifications and standard plane parameters of a standard plane where the plurality of pixel points are respectively located;
determining the standard distribution probability of the pixel points according to the standard plane identification of the standard plane in which any pixel point of any training image is positioned;
constructing a machine learning model;
and training to obtain model parameters of the machine learning model by using the standard distribution probability, the standard embedding vector, the standard pixel depth and the standard plane identification and standard plane parameters of the standard plane where the plurality of pixel points are respectively located, which are respectively corresponding to the plurality of pixel points of the at least one training image and any one training image.
In a possible design, the processing component utilizes the standard distribution probability, the standard embedded vector and the standard pixel depth corresponding to each of the plurality of pixel points of the at least one training image and any one training image, and the standard plane identifier and the standard plane parameter of the standard plane where each of the plurality of pixel points is located; the training to obtain the model parameters of the machine learning model may specifically be:
determining reference model parameters of the machine learning model;
inputting any training image into the machine learning model corresponding to the reference model parameter, and calculating to obtain an estimated distribution probability, an estimated embedding vector, an estimated pixel depth and an estimated plane parameter corresponding to a plurality of pixel points of the training image respectively so as to obtain an estimated distribution probability, an estimated embedding vector, an estimated pixel depth and an estimated plane parameter corresponding to a plurality of pixel points of the at least one training image respectively;
determining standard embedded vectors respectively corresponding to a plurality of standard planes of the training images aiming at estimation embedded vectors respectively corresponding to a plurality of pixel points of any training image and standard plane identifications of the standard planes respectively corresponding to the pixel points so as to obtain the standard embedded vectors respectively corresponding to the standard planes of at least one training image;
determining an estimation error of the at least one training image based on an estimated distribution probability and a standard distribution probability of a plurality of pixel points corresponding to the at least one training image, an estimated embedding vector and a standard embedding vector, an estimated pixel depth and a standard pixel depth, and a standard embedding vector, an estimated plane parameter and a standard plane parameter corresponding to a plurality of standard planes of the at least one training image;
if the estimation error meets a loss condition, determining a reference model parameter of the machine learning model as a model parameter of the machine learning model;
and if the estimation error does not meet the loss condition, adjusting the reference model parameters of the machine learning model based on the estimation error, and returning to the step of determining the reference model parameters of the machine learning model to continue execution.
As a possible implementation manner, the determining, by the processing component, an estimation error of the at least one training image based on the estimated distribution probability and the standard distribution probability of the plurality of pixel points corresponding to the at least one training image, the estimated embedding vector, the estimated pixel depth and the standard pixel depth, and the standard embedding vector, the estimated plane parameter and the standard plane parameter corresponding to the plurality of standard planes of the at least one training image, may specifically include:
and aiming at any training image, determining the balance error of the training image according to the respective estimation distribution probability and standard distribution probability of a plurality of pixel points of the training image.
And determining the embedding error of the training image according to the respective estimated embedding vectors of the plurality of pixel points of the training image and the respective standard embedding vectors of the plurality of standard planes of the training image.
And determining the depth error of the training image according to the respective estimated pixel depth and standard pixel depth of a plurality of pixel points of the training image.
And determining the parameter error of the training image according to the respective estimated plane parameters and standard plane parameters of a plurality of pixel points of the training image.
And taking the sum of the balance error, the embedding error, the depth error and the parameter error of the training image as the training error of the training image.
And calculating the sum of training errors respectively corresponding to the at least one training image to obtain the estimation error of the at least one training image.
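The four-term training error described above can be sketched as follows. The decomposition (balance + embedding + depth + parameter) follows the text, but the concrete per-term formulas used here (cross-entropy for the distribution probability, mean absolute error for depth and plane parameters) are assumptions, since the application does not fix them.

```python
import numpy as np

def balance_error(p_est, p_std):
    # Error between estimated and standard plane distribution probabilities
    # (assumed binary cross-entropy form).
    p_est = np.clip(p_est, 1e-7, 1 - 1e-7)
    return float(-np.mean(p_std * np.log(p_est) + (1 - p_std) * np.log(1 - p_est)))

def depth_error(z_est, z_std):
    # Error between estimated and standard pixel depths (assumed L1 form).
    return float(np.mean(np.abs(z_est - z_std)))

def parameter_error(q_est, q_std):
    # Error between estimated and standard plane parameters (assumed L1 form).
    return float(np.mean(np.abs(q_est - q_std)))

# Toy values for one training image; the embedding error is treated as
# precomputed (see the pull/push sketch of the embedding error).
bal = balance_error(np.array([0.9, 0.8]), np.array([1.0, 1.0]))
dep = depth_error(np.array([2.1, 2.0]), np.array([2.0, 2.0]))
par = parameter_error(np.array([0.0, 0.0, 1.0, 2.0]),
                      np.array([0.0, 0.0, 1.0, 2.1]))
emb = 0.05

# Training error of this image: sum of the four terms; the estimation error of
# a batch is the sum of such per-image training errors.
training_error = bal + emb + dep + par
```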
In order to calculate an accurate vector error, the determining, by the processing component, the embedding error of the training image according to the estimated embedding vector of each of the plurality of pixel points of the training image and the standard embedding vector of each of the plurality of standard planes of the training image may specifically be:
taking the standard embedded vectors of a plurality of pixel points of the training image respectively corresponding to a standard plane as the standard embedded vectors respectively corresponding to the pixel points;
determining a first vector error between a plurality of pixel points of the training image and a plane where the pixel points are located according to the respective estimated embedded vectors and standard embedded vectors of the plurality of pixel points of the training image;
determining a second vector error between any two planes of the training image according to the respective standard embedded vectors of the plurality of standard planes of the training image;
obtaining an embedding error of the training image based on a difference between the first vector error and the second vector error.
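The embedding error above resembles a discriminative clustering loss: a first ("pull") term measures how far pixel embeddings are from the standard embedding of their own plane, and a second ("push") term measures how close different planes' standard embeddings are to each other. The following sketch assumes hinge-style margins, which the text does not specify.

```python
import numpy as np

def embedding_error(pixel_emb, plane_ids, delta_pull=0.1, delta_push=1.0):
    """Pull/push embedding error; the margins delta_pull/delta_push are assumed."""
    planes = np.unique(plane_ids)
    # Standard embedding of each plane: mean of its pixels' estimated embeddings.
    means = np.stack([pixel_emb[plane_ids == p].mean(axis=0) for p in planes])

    # First vector error: pixels vs the standard embedding of their own plane.
    pull = 0.0
    for i, p in enumerate(planes):
        dist = np.linalg.norm(pixel_emb[plane_ids == p] - means[i], axis=1)
        pull += np.mean(np.maximum(dist - delta_pull, 0.0) ** 2)
    pull /= len(planes)

    # Second vector error: pairwise distances between plane standard embeddings.
    push, n_pairs = 0.0, 0
    for i in range(len(planes)):
        for j in range(i + 1, len(planes)):
            dist = np.linalg.norm(means[i] - means[j])
            push += np.maximum(delta_push - dist, 0.0) ** 2
            n_pairs += 1
    if n_pairs:
        push /= n_pairs
    return float(pull + push)
```

Tight, well-separated planes give a near-zero error; overlapping planes are penalized by both terms.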
As a possible implementation manner, the determining, by the processing component, the standard embedding vectors respectively corresponding to the plurality of standard planes of the training image for the estimated embedding vectors respectively corresponding to the plurality of pixel points of any training image and the standard plane identifiers of the standard planes in which the plurality of pixel points are respectively located, so as to obtain the standard embedding vectors respectively corresponding to the plurality of standard planes of the at least one training image may specifically include:
aiming at any training image, dividing pixel points with the same standard plane identification in a plurality of pixel points of the training image into the same pixel set to obtain pixel sets corresponding to a plurality of standard planes of the training image respectively; and respectively calculating the vector mean value of the estimated embedded vector of at least one pixel point in the pixel set respectively corresponding to the plurality of standard planes to obtain the standard embedded vectors corresponding to the distribution of the plurality of standard planes.
In order to obtain an accurate distribution probability, the processing component determines, according to the standard plane identifier of the standard plane where any pixel of any training image is located, the standard distribution probability of the pixel specifically may be:
if the standard plane mark of the standard plane where any pixel point is located meets the plane distribution condition, determining the standard distribution probability of the pixel point as a first distribution probability;
and if the standard plane mark of the standard plane where any pixel point is located does not meet the plane distribution condition, determining the standard distribution probability of the pixel point as a second distribution probability.
In some embodiments, the extracting, by the processing component, the image feature of the image to be processed may specifically include:
and extracting the image features of the image to be processed based on a feature extraction algorithm.
In some embodiments, the dividing, by the processing component, the pixel points located in the same plane among the plurality of pixel points into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane where the plurality of pixel points are located, may specifically include:
and respectively inputting the plane distribution probability and the plane embedded vector of the plurality of pixel points on the plane where the plurality of pixel points are located into a mean value clustering algorithm, and dividing the pixel points belonging to the same plane into the same pixel set.
The mean clustering algorithm described in the embodiment of the present application may include a K-means clustering algorithm. When the K-means clustering algorithm is used to divide pixel points belonging to the same plane into the same pixel set, as a possible implementation manner, the processing component inputs the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane where the plurality of pixel points are located into the K-means clustering algorithm, and the dividing of the pixel points belonging to the same plane into the same pixel set may specifically include:
dividing the plurality of pixel points into at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point;
for any clustering center point, determining vector distances between the candidate pixel points and the clustering center point according to the plane distribution probability and the plane embedding vector of the candidate pixel points on the plane where the candidate pixel points are located and the plane distribution probability and the plane embedding vector of the clustering center point on the plane where the candidate pixel points are located;
dividing the candidate pixel points into the pixel set represented by the nearest clustering center point, that is, the one with the smallest vector distance and thus the highest plane correlation, according to the vector distances between the candidate pixel points and each clustering center point, so as to obtain at least one pixel set; each pixel set comprises a plurality of pixel points confirmed to belong to the same plane;
obtaining a central pixel point of at least one candidate pixel point in the at least one pixel set;
if at least one central pixel point and at least one clustering central point meet the convergence condition, determining at least one pixel set formed by dividing the same pixel point into the same pixel set in the plurality of pixel points;
if the at least one central pixel point and the at least one clustering center point do not meet the convergence condition, taking the at least one central pixel point as a new at least one clustering center point, and returning to the step of dividing the plurality of pixel points into the at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point to continue to be executed.
K in the K-means clustering algorithm represents the number of at least one set clustering center point, and K clustering center points are set, namely a plurality of pixel points of the image to be processed can be divided into K pixel sets. One cluster center point corresponds to one set of pixels. When at least one cluster center point is set, K pixel points can be randomly selected from a plurality of pixel points of the image to be processed to serve as the at least one cluster center.
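The iterative loop described above can be sketched as a minimal K-means over per-pixel feature vectors (plane distribution probability concatenated with the plane embedding vector); the random initialization and iteration cap are assumptions.

```python
import numpy as np

def kmeans(features, k, iters=50, seed=0):
    """Minimal K-means: features is an (n_pixels, n_features) array."""
    rng = np.random.default_rng(seed)
    # Randomly select K pixel points as the initial clustering center points.
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel point to the nearest clustering center point
        # (smallest vector distance).
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Central pixel point of each pixel set: the mean of its members.
        new_centers = np.stack([
            features[labels == c].mean(axis=0) if np.any(labels == c) else centers[c]
            for c in range(k)
        ])
        # Convergence condition: centers no longer move.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```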
In order to make the at least one pixel set obtained after dividing the plurality of pixel points of the image to be processed better match the number of planes actually present in the image, and to improve clustering efficiency, in some embodiments the processing component determines the number of the at least one cluster center point by:
dividing the image to be processed according to a plurality of spatial grids to obtain a plurality of regional images;
extracting the area characteristics corresponding to the area images respectively;
dividing a plurality of regional images with regional feature similarity meeting similarity dividing conditions into the same regional set to obtain at least one regional set;
determining the number of the at least one region set as the number of the at least one cluster center point.
Further, optionally, the extracting, by the processing component, the region features respectively corresponding to the plurality of region images includes:
respectively calculating the characteristic mean value of at least one pixel point of each of the plurality of regional images;
the dividing, by the processing component, the region images of which the similarity of the region features satisfies the similarity dividing condition into the same region set to obtain at least one region set may specifically include:
calculating the characteristic distance between the respective feature mean values of any two regional images in the plurality of regional images;
dividing any two area images with the characteristic distance smaller than a distance threshold value into the same area set, and obtaining area sets corresponding to the area images respectively so as to obtain at least one area set.
As a possible implementation manner, the calculating, by the processing component, the feature mean of at least one pixel point of each of the plurality of region images may specifically include:
aiming at the at least one pixel point corresponding to any regional image, calculating the average of the estimated embedding vectors respectively corresponding to the at least one pixel point to obtain the average embedding vector corresponding to the regional image; and respectively taking the obtained average embedding vectors corresponding to the at least one region image as the feature mean values of the at least one region image.
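The K-estimation procedure above (divide the image into spatial grids, take each region image's feature mean as its average embedding vector, then group regions whose feature distance is below the threshold into the same region set) can be sketched as follows. The grid size, distance threshold, and the use of union-find for the grouping are illustrative assumptions:

```python
import numpy as np

def estimate_k(embed_map, grid=4, dist_thresh=0.5):
    """Estimate the number of clustering center points K.

    `embed_map` is an (H, W, D) per-pixel embedding map; K is the number
    of region sets formed by grouping similar grid regions.
    """
    h, w, _ = embed_map.shape
    means = []
    # Divide the image into grid x grid region images and compute each
    # region's feature mean (average embedding vector over its pixels).
    for i in range(grid):
        for j in range(grid):
            region = embed_map[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid]
            means.append(region.reshape(-1, region.shape[-1]).mean(axis=0))
    # Union-find grouping: any two regions whose feature distance is
    # smaller than the threshold fall into the same region set.
    parent = list(range(len(means)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a in range(len(means)):
        for b in range(a + 1, len(means)):
            if np.linalg.norm(means[a] - means[b]) < dist_thresh:
                parent[find(a)] = find(b)
    # K = number of distinct region sets.
    return len({find(x) for x in range(len(means))})
```

On an image whose left and right halves carry clearly different embeddings, this yields K = 2, matching the two planes present.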
As another embodiment, the determining, by the processing component, the plane parameter of the plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any one pixel set may specifically include:
determining pixel plane parameters corresponding to at least one pixel point in any pixel set according to the pixel depth and the unit normal vector corresponding to the at least one pixel point respectively;
and carrying out weighted summation on the pixel plane parameters respectively corresponding to the at least one pixel point to obtain the plane parameters of the plane corresponding to the pixel set.
As a possible implementation manner, the determining, by the processing component, the pixel plane parameter corresponding to each of at least one pixel point in any one pixel set according to the pixel depth and the unit normal vector corresponding to each of the at least one pixel point in any one pixel set may specifically include:
and obtaining the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point according to the pixel depth and the unit normal vector corresponding to any pixel point in any pixel set.
And determining the unit normal vector and the pixel plane distance corresponding to the pixel point as the pixel plane parameters of the pixel point in the camera coordinate system, so as to obtain the pixel plane parameters of at least one pixel point in the pixel set in the camera coordinate system respectively.
Further, optionally, the obtaining, by the processing component, a pixel plane distance from a coordinate origin of the camera coordinate system to a pixel plane corresponding to the pixel point according to the pixel depth and the unit normal vector corresponding to any pixel point in any pixel set may specifically include:
determining an image coordinate point of any pixel point in any pixel set in an image coordinate system;
acquiring camera internal parameters of a camera for acquiring the image to be processed;
determining a camera coordinate point of the pixel point in the camera coordinate system according to the pixel depth and the image coordinate point corresponding to the pixel point and by combining the camera internal parameters;
and inputting the camera coordinate point corresponding to the pixel point and the unit normal vector into a plane distance calculation formula to obtain the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point.
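A minimal sketch of this back-projection and plane-distance computation, assuming a standard 3x3 pinhole intrinsic matrix and the point-on-plane equation n · P = d as the "plane distance calculation formula" (the source does not spell the formula out, so this is an assumption):

```python
import numpy as np

def pixel_plane_distance(u, v, depth, normal, K):
    """Distance from the camera-coordinate origin to the pixel's plane.

    (u, v) is the pixel's image coordinate point, `depth` its pixel depth,
    `normal` its unit normal vector, and K the 3x3 camera intrinsic matrix.
    """
    # Determine the camera coordinate point of the pixel from its image
    # coordinate point and pixel depth, using the camera internal parameters.
    P = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Plane distance: project the camera coordinate point onto the unit
    # normal vector (n . P = d for any point P on the plane).
    return float(np.dot(normal, P))
```

With the identity intrinsics, a pixel at the principal point with depth 2 and normal (0, 0, 1) gives a plane distance of 2, as expected for a fronto-parallel plane.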
In a possible design, the weighted summation of the pixel plane parameters corresponding to the at least one pixel point by the processing component to obtain the plane parameter of the plane corresponding to the pixel set may specifically include:
carrying out weighted summation on unit normal vectors corresponding to the at least one pixel point respectively to obtain an average normal vector of the at least one pixel point;
carrying out weighted summation on the pixel plane distances corresponding to the at least one pixel point respectively to obtain an average plane distance corresponding to the at least one pixel point;
and determining the average normal vector and the average plane distance as plane parameters of the plane corresponding to the pixel set.
Further, optionally, the unit normal vector corresponding to any one of the pixel points includes: a first value, a second value, and a third value;
the weighting and summing, by the processing component, the unit normal vectors respectively corresponding to the at least one pixel point, and obtaining the average normal vector of the at least one pixel point may specifically include:
obtaining the evaluation weight of the at least one pixel point on the plane where the pixel point is located;
weighting and summing the first numerical value of the unit normal vector of at least one pixel point and the corresponding evaluation weight thereof to obtain a first target value;
carrying out weighted summation on the second numerical value of the unit normal vector of the at least one pixel point and the corresponding evaluation weight thereof to obtain a second target value;
weighting and summing a third numerical value of the unit normal vector of the at least one pixel point and the corresponding evaluation weight thereof to obtain a third target value;
and determining an average normal vector formed by the first target value, the second target value and the third target value.
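The component-wise weighted fusion above can be sketched as follows. Normalizing the evaluation weights (so the weighted sum is an average) and renormalizing the averaged normal to unit length are assumptions here, since the source does not fix the weighting scheme:

```python
import numpy as np

def fuse_plane_parameters(normals, distances, weights):
    """Weighted fusion of per-pixel plane parameters into one plane.

    `normals` is an (N, 3) array of unit normal vectors, `distances` the N
    pixel plane distances, and `weights` the pixels' evaluation weights on
    their plane (e.g. the plane distribution probability — an assumption).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weighted sum is an average
    # Weighted sum of each component (first, second, third value) of the
    # unit normal vectors gives the average normal vector.
    avg_normal = (np.asarray(normals, dtype=float) * w[:, None]).sum(axis=0)
    avg_normal /= np.linalg.norm(avg_normal)  # keep it a unit vector
    # Weighted sum of the pixel plane distances gives the average distance.
    avg_distance = float(np.dot(w, distances))
    return avg_normal, avg_distance
```

The returned pair (average normal vector, average plane distance) is the plane parameter of the plane corresponding to the pixel set.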
The image processing device shown in fig. 9 may execute the image processing methods described in the embodiments shown in fig. 1 and fig. 3 to fig. 5; the implementation principles and technical effects are similar and are not repeated here. The specific manner in which the processing component performs the various steps in the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where the storage medium is used to store a computer program, and when the computer program is executed, the computer program may perform the image processing method in the foregoing embodiment.
As shown in fig. 10, a schematic structural diagram of an embodiment of an intelligent control device provided in the embodiment of the present application, where the intelligent control device may include: a storage component 1001 and a processing component 1002; the storage component 1001 is used to store one or more computer instructions; the one or more computer instructions are invoked by the processing component 1002;
the processing component 1002 is configured to:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set; determining plane parameters of a plane corresponding to a pixel set according to plane calculation parameters respectively corresponding to at least one pixel point in any pixel set; obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set; and intelligently controlling the electronic equipment for acquiring the image to be processed by utilizing the plane parameters respectively corresponding to at least one plane in the image to be processed.
In the embodiment of the application, plane parameters of at least one plane in the image to be processed are obtained by analyzing the image to be processed, so that the electronic device that acquired the image can be intelligently controlled using the plane parameters respectively corresponding to the at least one plane. Because the process of obtaining these plane parameters is optimized, the plane parameters of each plane are more accurate, and the intelligent control of the electronic device is accordingly more accurate.
The intelligent control device shown in fig. 10 may execute the intelligent control method described in the embodiment shown in fig. 6; the implementation principles and technical effects are similar and are not repeated here. The specific manner in which the processing component performs the various steps in the above embodiments has been described in detail in the method embodiments and will not be elaborated here.
In addition, an embodiment of the present application further provides a computer-readable storage medium, where the storage medium is used to store a computer program, and when the computer program is executed, the intelligent control method in the foregoing embodiment may be executed.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by adding a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, may be embodied in the form of a computer program product, which may be implemented on one or more computer-usable storage media (including, without limitation, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of computer storage media include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer readable media do not include transitory computer readable media such as modulated data signals and carrier waves.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (25)

1. An image processing method, comprising:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set;
determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set;
determining plane parameters of a plane corresponding to a pixel set according to plane calculation parameters respectively corresponding to at least one pixel point in any pixel set;
and obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
2. The method according to claim 1, wherein the determining the plane calculation parameters corresponding to at least one pixel point included in the same pixel set respectively comprises:
determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set;
the determining, according to the plane calculation parameter corresponding to at least one pixel point in any one pixel set, the plane parameter of the plane corresponding to the pixel set includes:
and determining plane parameters of a plane corresponding to the pixel set according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any pixel set.
3. The method according to claim 2, wherein the dividing of the pixels located in the same plane among the plurality of pixels of the image to be processed into the same pixel set comprises:
determining plane association parameters between a plurality of pixel points of the image to be processed and a plane where the pixel points are located;
and dividing the pixels positioned on the same plane in the plurality of pixels into the same pixel set according to the plane association parameters respectively corresponding to the plurality of pixels.
4. The method according to claim 3, wherein the determining plane association parameters between the plurality of pixel points of the image to be processed and the plane in which the pixel points are located respectively comprises:
determining the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane where the pixel points are located;
the dividing, according to the plane association parameters respectively corresponding to the plurality of pixel points, the pixel points located on the same plane among the plurality of pixel points into the same pixel set includes:
and dividing the pixel points positioned on the same plane in the plurality of pixel points into the same pixel set according to the plane distribution probability and the plane embedding vector of the plurality of pixel points on the plane on which the pixel points are positioned.
5. The method of claim 4, wherein the determining the plane distribution probability and the plane embedding vector of each of the plurality of pixels on the plane thereof comprises:
extracting image features of the image to be processed;
inputting the image characteristics of the image to be processed into a machine learning model so as to obtain the plane distribution probability, the plane embedding vector, the pixel depth and the unit normal vector of the pixel plane corresponding to the pixel points of the image to be processed on the plane where the pixel points are respectively located;
the determining the pixel depth and the unit normal vector corresponding to at least one pixel point contained in the same pixel set respectively comprises:
and determining the pixel depth and the unit normal vector respectively corresponding to at least one pixel point contained in the same pixel set according to the pixel depth and the unit normal vector respectively corresponding to the plurality of pixel points.
6. The method of claim 5, wherein the machine learning model is obtained by training:
determining at least one training image; the method comprises the steps that a plurality of pixel points of any training image correspond to standard pixel depths and standard plane identifications and standard plane parameters of a standard plane where the pixel points are located respectively;
determining the standard distribution probability of the pixel points according to the standard plane identification of the standard plane in which any pixel point of any training image is positioned;
constructing a machine learning model;
and training to obtain model parameters of the machine learning model by using the standard distribution probability and the standard pixel depth which correspond to the plurality of pixel points of the at least one training image and any one training image respectively, and the standard plane identification and the standard plane parameters of the standard plane in which the plurality of pixel points are respectively located.
7. The method according to claim 6, wherein the training to obtain the model parameters of the machine learning model by using the standard distribution probability and the standard pixel depth corresponding to each of the at least one training image and the plurality of pixel points of any one training image, and the standard plane identifier and the standard plane parameters of the standard plane in which each of the plurality of pixel points is located comprises:
determining reference model parameters of the machine learning model;
inputting any training image into the machine learning model corresponding to the reference model parameter, and calculating to obtain an estimated distribution probability, an estimated embedding vector, an estimated pixel depth and an estimated plane parameter corresponding to a plurality of pixel points of the training image respectively so as to obtain an estimated distribution probability, an estimated embedding vector, an estimated pixel depth and an estimated plane parameter corresponding to a plurality of pixel points of the at least one training image respectively;
determining standard embedded vectors respectively corresponding to a plurality of standard planes of the training images aiming at estimation embedded vectors respectively corresponding to a plurality of pixel points of any training image and standard plane identifications of the standard planes respectively corresponding to the pixel points so as to obtain the standard embedded vectors respectively corresponding to the standard planes of at least one training image;
determining an estimation error of the at least one training image based on the estimated distribution probability and the standard distribution probability of a plurality of pixel points corresponding to the at least one training image, the estimated embedding vector, the estimated pixel depth and the standard pixel depth, and the standard embedding vector, the estimated plane parameter and the standard plane parameter corresponding to a plurality of standard planes of the at least one training image;
if the estimation error meets a loss condition, determining a reference model parameter of the machine learning model as a model parameter of the machine learning model;
and if the estimation error does not meet the loss condition, adjusting the reference model parameters of the machine learning model based on the estimation error, and returning to the step of determining the reference model parameters of the machine learning model to continue execution.
8. The method of claim 7, wherein determining the estimation error of the at least one training image based on the estimated distribution probability and the standard distribution probability, the estimated embedding vector, the estimated pixel depth and the standard pixel depth of the plurality of pixel points corresponding to the at least one training image, and the standard embedding vector, the estimated plane parameter and the standard plane parameter corresponding to the plurality of standard planes of the at least one training image comprises:
aiming at any training image, determining the balance error of the training image according to the respective estimated distribution probability and standard distribution probability of a plurality of pixel points of the training image;
determining an embedding error of the training image according to the respective estimated embedding vectors of the plurality of pixel points of the training image and the respective standard embedding vectors of the plurality of standard planes of the training image;
determining the depth error of the training image according to the respective estimated pixel depth and standard pixel depth of a plurality of pixel points of the training image;
determining parameter errors of the training images according to respective estimated plane parameters and standard plane parameters of a plurality of pixel points of the training images;
taking the sum of the balance error, the embedding error, the depth error and the parameter error of the training image as the training error of the training image;
and calculating the sum of training errors respectively corresponding to the at least one training image to obtain the estimation error of the at least one training image.
9. The method of claim 8, wherein determining the embedding error of the training image based on the estimated embedding vector for each of the plurality of pixels of the training image and the standard embedding vector for each of the plurality of standard planes of the training image comprises:
taking the standard embedded vectors of a plurality of pixel points of the training image respectively corresponding to a standard plane as the standard embedded vectors respectively corresponding to the pixel points;
determining a first vector error between a plurality of pixel points of the training image and a plane where the pixel points are located according to the respective estimated embedded vectors and standard embedded vectors of the plurality of pixel points of the training image;
determining a second vector error between any two planes of the training image according to the respective standard embedded vectors of the plurality of standard planes of the training image;
obtaining an embedding error of the training image based on a difference between the first vector error and the second vector error.
10. The method according to claim 7, wherein the determining, for the estimated embedding vectors corresponding to the plurality of pixel points of any one of the training images and the standard plane identifications of the standard planes in which the plurality of pixel points are respectively located, the standard embedding vectors corresponding to the plurality of standard planes of the training image, respectively, to obtain the standard embedding vectors corresponding to the plurality of standard planes of the at least one training image, respectively, comprises:
aiming at any training image, dividing pixel points with the same standard plane identification in a plurality of pixel points of the training image into the same pixel set to obtain pixel sets corresponding to a plurality of standard planes of the training image respectively;
and respectively calculating the vector mean value of the estimated embedded vectors of the at least one pixel point in the pixel sets respectively corresponding to the plurality of standard planes, to obtain the standard embedded vectors respectively corresponding to the plurality of standard planes.
11. The method according to claim 6, wherein the determining the standard distribution probability of the pixel points according to the standard plane identifier of the standard plane in which any pixel point of any training image is located comprises:
if the standard plane mark of the standard plane where any pixel point is located meets the plane distribution condition, determining the standard distribution probability of the pixel point as a first distribution probability;
and if the standard plane mark of the standard plane where any pixel point is located does not meet the plane distribution condition, determining the standard distribution probability of the pixel point as a second distribution probability.
12. The method of claim 5, wherein the extracting image features of the image to be processed comprises:
and extracting the image features of the image to be processed based on a feature extraction algorithm.
13. The method of claim 4, wherein the dividing, according to the plane distribution probability and the plane embedding vector of the plurality of pixels on the plane where the plurality of pixels are located, the pixels located on the same plane among the plurality of pixels into the same pixel set comprises:
and respectively inputting the plane distribution probability and the plane embedded vector of the plurality of pixel points on the plane where the plurality of pixel points are located into a mean value clustering algorithm, and dividing the pixel points belonging to the same plane into the same pixel set.
14. The method of claim 13, wherein the step of classifying the pixels belonging to the same plane into the same pixel set by using the plane distribution probability and the plane embedded vector input mean value clustering algorithm of the pixels on the plane where the pixels are located respectively comprises:
dividing the plurality of pixel points into at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point;
for any clustering center point, determining vector distances between the candidate pixel points and the clustering center point according to the plane distribution probability and the plane embedding vector of the candidate pixel points on the plane where the candidate pixel points are located, and the plane distribution probability and the plane embedding vector of the clustering center point on the plane where the clustering center point is located;
dividing the candidate pixel points into pixel sets represented by the clustering center points with the highest plane correlation parameter according to the vector distance between the candidate pixel points and each clustering center point respectively to obtain at least one pixel set; each pixel set comprises a plurality of pixel points confirmed to belong to the same plane;
obtaining a central pixel point of at least one candidate pixel point in the at least one pixel set;
if the at least one central pixel point and the at least one clustering center point meet the convergence condition, taking the at least one pixel set as the result of dividing the pixel points located on the same plane among the plurality of pixel points into the same pixel set;
if the at least one central pixel point and the at least one clustering center point do not meet the convergence condition, taking the at least one central pixel point as a new at least one clustering center point, and returning to the step of dividing the plurality of pixel points into the at least one clustering center point and a plurality of pixel points to be classified except the at least one clustering center point to continue to be executed.
15. The method of claim 14, wherein the number of the at least one cluster center points is determined by:
dividing the image to be processed according to a plurality of spatial grids to obtain a plurality of regional images;
extracting the area characteristics corresponding to the area images respectively;
dividing a plurality of regional images with regional feature similarity meeting similarity dividing conditions into the same regional set to obtain at least one regional set;
determining the number of the at least one region set as the number of the at least one cluster center point.
16. The method according to claim 15, wherein the extracting the region features corresponding to the plurality of region images respectively comprises:
respectively calculating the characteristic mean value of at least one pixel point of each of the plurality of regional images;
the dividing the region images of which the similarity of the region features meets the similarity dividing condition into the same region set to obtain at least one region set comprises:
calculating the characteristic distance between the respective characteristic mean values of any two area images in the plurality of area images;
dividing any two area images with the characteristic distance smaller than a distance threshold value into the same area set, and obtaining area sets corresponding to the area images respectively so as to obtain at least one area set.
17. The method according to claim 16, wherein the calculating the feature mean of at least one pixel point of each of the plurality of region images comprises:
for at least one pixel point corresponding to any region image, calculating the average of the estimated embedding vectors respectively corresponding to the at least one pixel point to obtain the average embedding vector corresponding to the region image;
and respectively taking the obtained average embedding vectors corresponding to the at least one region image as the feature mean values of the at least one region image.
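Claims 15 through 17 determine how many cluster centers to use: split the image into spatial grid regions, take each region's mean embedding as its feature, merge regions whose feature distance is below a threshold, and count the resulting region sets. A minimal sketch under those steps; the function name, grid size, and threshold value are hypothetical, and the union-find merge is one reasonable way to realize "divide any two area images with distance below the threshold into the same set":

```python
import numpy as np

def estimate_cluster_count(embedding_map, grid=4, dist_threshold=0.5):
    """Split a per-pixel embedding map (H x W x D) into grid x grid region
    images, use each region's mean embedding as its region feature, merge
    regions whose feature distance is below the threshold, and return the
    number of resulting region sets (= number of cluster center points)."""
    h, w, _ = embedding_map.shape
    means = []
    for i in range(grid):
        for j in range(grid):
            region = embedding_map[i * h // grid:(i + 1) * h // grid,
                                   j * w // grid:(j + 1) * w // grid]
            means.append(region.reshape(-1, region.shape[-1]).mean(axis=0))
    means = np.array(means)

    # Union-find: regions closer than the threshold end up in the same set.
    parent = list(range(len(means)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a in range(len(means)):
        for b in range(a + 1, len(means)):
            if np.linalg.norm(means[a] - means[b]) < dist_threshold:
                parent[find(a)] = find(b)
    return len({find(i) for i in range(len(means))})
```

On an image whose left and right halves carry clearly different embeddings, a 2 x 2 grid yields four regions that collapse into two sets, so two cluster centers would be used.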
18. The method according to claim 2, wherein the determining, according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any one pixel set, the plane parameter of the plane corresponding to the pixel set comprises:
determining pixel plane parameters corresponding to at least one pixel point in any pixel set according to the pixel depth and the unit normal vector corresponding to the at least one pixel point respectively;
and carrying out weighted summation on the pixel plane parameters respectively corresponding to the at least one pixel point to obtain the plane parameters of the plane corresponding to the pixel set.
19. The method according to claim 18, wherein the determining, according to the pixel depth and the unit normal vector respectively corresponding to at least one pixel point in any one pixel set, the pixel plane parameter respectively corresponding to the at least one pixel point comprises:
according to the pixel depth and the unit normal vector corresponding to any pixel point in any pixel set, obtaining the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point;
and determining the unit normal vector and the pixel plane distance corresponding to the pixel point as the pixel plane parameters of the pixel point in the camera coordinate system, so as to obtain the pixel plane parameters of at least one pixel point in the pixel set in the camera coordinate system respectively.
20. The method of claim 19, wherein obtaining a pixel plane distance from a coordinate origin of a camera coordinate system to a pixel plane corresponding to the pixel point according to a pixel depth and a unit normal vector corresponding to any one pixel point in any one pixel set comprises:
determining an image coordinate point of any pixel point in any pixel set in an image coordinate system;
acquiring camera internal parameters of a camera for acquiring the image to be processed;
determining a camera coordinate point of the pixel point in the camera coordinate system according to the pixel depth and the image coordinate point corresponding to the pixel point and by combining the camera internal parameters;
and inputting the camera coordinate point corresponding to the pixel point and the unit normal vector into a plane distance calculation formula to obtain the pixel plane distance from the coordinate origin of the camera coordinate system to the pixel plane corresponding to the pixel point.
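Claims 19 and 20 back-project a pixel through the camera intrinsics and then measure the distance from the camera origin to that pixel's plane. For a plane with unit normal n passing through a 3-D point P, the origin-to-plane distance is |n · P|, which is presumably the "plane distance calculation formula" the claim refers to. A minimal sketch; the function name and the specific form of the intrinsic matrix K are illustrative assumptions:

```python
import numpy as np

def pixel_plane_distance(u, v, depth, normal, K):
    """Back-project image point (u, v) with the given pixel depth through
    camera intrinsics K into the camera coordinate system, then return the
    distance from the camera origin to the plane through that 3-D point
    with the given unit normal."""
    # Homogeneous image coordinates -> camera-frame point, scaled by depth.
    uv1 = np.array([u, v, 1.0])
    point_cam = depth * (np.linalg.inv(K) @ uv1)
    # The plane n . x = d passes through point_cam, so d = |n . point_cam|.
    return abs(np.dot(normal, point_cam))
```

For a pixel at the principal point with depth 2 and a plane normal along the optical axis, the back-projected point is (0, 0, 2) and the plane distance is 2.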
21. The method according to claim 19, wherein the performing weighted summation on the pixel plane parameters respectively corresponding to the at least one pixel point to obtain the plane parameters of the plane corresponding to the pixel set comprises:
carrying out weighted summation on unit normal vectors corresponding to the at least one pixel point respectively to obtain an average normal vector of the at least one pixel point;
carrying out weighted summation on the pixel plane distances corresponding to the at least one pixel point respectively to obtain an average plane distance corresponding to the at least one pixel point;
and determining the average normal vector and the average plane distance as plane parameters of the plane corresponding to the pixel set.
22. The method of claim 21, wherein the corresponding unit normal vector of any one pixel point comprises: a first value, a second value, and a third value;
the weighting and summing unit normal vectors respectively corresponding to the at least one pixel point to obtain an average normal vector of the at least one pixel point includes:
obtaining the evaluation weight of the at least one pixel point on the plane where the pixel point is located;
carrying out weighted summation on the first numerical values of the unit normal vectors of the at least one pixel point using the corresponding evaluation weights to obtain a first target value;
carrying out weighted summation on the second numerical values of the unit normal vectors of the at least one pixel point using the corresponding evaluation weights to obtain a second target value;
carrying out weighted summation on the third numerical values of the unit normal vectors of the at least one pixel point using the corresponding evaluation weights to obtain a third target value;
and determining an average normal vector formed by the first target value, the second target value and the third target value.
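Claims 21 and 22 combine the per-pixel parameters into a single plane parameter by weighted summation, component by component (the three components of each unit normal, plus the plane distance). A minimal sketch; the function name and the normalization of the weights to sum to one are assumptions the claims do not state explicitly:

```python
import numpy as np

def plane_parameters(normals, distances, weights):
    """Combine per-pixel unit normals (N x 3), plane distances (N,), and
    evaluation weights (N,) into one plane parameter (average normal,
    average distance) by weighted summation of each component."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the evaluation weights sum to 1
    # Weighted sum of the first, second, and third values of each normal.
    avg_normal = (w[:, None] * np.asarray(normals, dtype=float)).sum(axis=0)
    # Weighted sum of the per-pixel plane distances.
    avg_distance = float(np.dot(w, distances))
    return avg_normal, avg_distance
```

Note that a weighted sum of unit normals is generally not itself unit length; a practical implementation might renormalize the averaged normal, a detail the claims leave open.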
23. An intelligent control method, comprising:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set;
determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set;
determining plane parameters of a plane corresponding to a pixel set according to plane calculation parameters respectively corresponding to at least one pixel point in any pixel set;
obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set;
and intelligently controlling the electronic equipment for acquiring the image to be processed by utilizing the plane parameters respectively corresponding to at least one plane in the image to be processed.
24. An image processing apparatus characterized by comprising: a storage component and a processing component; the storage component is used for storing one or more computer instructions; the one or more computer instructions are invoked by the processing component;
the processing component is to:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set; determining plane parameters of a plane corresponding to a pixel set according to plane calculation parameters respectively corresponding to at least one pixel point in any pixel set; and obtaining the plane parameters of at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set.
25. An intelligent control device, comprising: a storage component and a processing component; the storage component is used for storing one or more computer instructions; the one or more computer instructions are invoked by the processing component;
the processing component is to:
dividing pixel points which are positioned on the same plane in a plurality of pixel points of an image to be processed into the same pixel set; determining plane calculation parameters respectively corresponding to at least one pixel point contained in the same pixel set; determining plane parameters of a plane corresponding to a pixel set according to plane calculation parameters respectively corresponding to at least one pixel point in any pixel set; obtaining plane parameters respectively corresponding to at least one plane in the image to be processed based on the plane parameters respectively corresponding to the at least one pixel set; and intelligently controlling the electronic equipment for acquiring the image to be processed by utilizing the plane parameters respectively corresponding to at least one plane in the image to be processed.
CN202010203443.6A 2020-03-20 2020-03-20 Image processing and intelligent control method and equipment Pending CN113435465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203443.6A CN113435465A (en) 2020-03-20 2020-03-20 Image processing and intelligent control method and equipment

Publications (1)

Publication Number Publication Date
CN113435465A true CN113435465A (en) 2021-09-24

Family

ID=77752480

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203443.6A Pending CN113435465A (en) 2020-03-20 2020-03-20 Image processing and intelligent control method and equipment

Country Status (1)

Country Link
CN (1) CN113435465A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117611648A (en) * 2023-12-04 2024-02-27 北京斯年智驾科技有限公司 Image depth estimation method, system and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102742274A * 2010-02-05 2012-10-17 Sony Corp Image processing device and method
CN104252707A (en) * 2013-06-27 2014-12-31 株式会社理光 Object detecting method and device
CN108876906A (en) * 2018-06-06 2018-11-23 链家网(北京)科技有限公司 The method and device of virtual three-dimensional model is established based on the global plane optimizing of cloud
CN108986155A * 2017-06-05 2018-12-11 Fujitsu Ltd The depth estimation method and estimation of Depth equipment of multi-view image
CN110378196A (en) * 2019-05-29 2019-10-25 电子科技大学 A kind of road vision detection method of combination laser point cloud data
CN110458805A (en) * 2019-03-26 2019-11-15 华为技术有限公司 Plane detection method, computing device and circuit system
CN110533663A (en) * 2018-05-25 2019-12-03 杭州海康威视数字技术股份有限公司 A kind of image parallactic determines method, apparatus, equipment and system
CN110807798A (en) * 2018-08-03 2020-02-18 华为技术有限公司 Image recognition method, system, related device and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN110988912B (en) Road target and distance detection method, system and device for automatic driving vehicle
Krajník et al. A practical multirobot localization system
Lin et al. A fast, complete, point cloud based loop closure for LiDAR odometry and mapping
Brand et al. Submap matching for stereo-vision based indoor/outdoor SLAM
CN111429574A (en) Mobile robot positioning method and system based on three-dimensional point cloud and vision fusion
JP5800494B2 (en) Specific area selection device, specific area selection method, and program
Eppenberger et al. Leveraging stereo-camera data for real-time dynamic obstacle detection and tracking
Lin et al. Vision-based formation for UAVs
CN112578673B (en) Perception decision and tracking control method for multi-sensor fusion of formula-free racing car
Ji et al. RGB-D SLAM using vanishing point and door plate information in corridor environment
KR101460313B1 (en) Apparatus and method for robot localization using visual feature and geometric constraints
Rocha et al. Object recognition and pose estimation for industrial applications: A cascade system
Zelener et al. Cnn-based object segmentation in urban lidar with missing points
CA2785384C (en) Method for classifying objects in an imaging surveillance system
Lu et al. Knowing where I am: exploiting multi-task learning for multi-view indoor image-based localization.
EP3088983B1 (en) Moving object controller and program
CN113435465A (en) Image processing and intelligent control method and equipment
CN114830185A (en) Position determination by means of a neural network
Chai et al. ORB-SHOT SLAM: trajectory correction by 3D loop closing based on bag-of-visual-words (BoVW) model for RGB-D visual SLAM
EP4050510A1 (en) Object information calculation method and system
Jiménez Serrata et al. An intelligible implementation of FastSLAM2.0 on a low-power embedded architecture
CN114821386A (en) Four-legged robot posture accurate estimation method based on multiple sight vectors
Andert et al. A fast and small 3-d obstacle model for autonomous applications
CN113963027B (en) Uncertainty detection model training method and device, and uncertainty detection method and device
Shetty et al. Multi Cost Function Fuzzy Stereo Matching Algorithm for Object Detection and Robot Motion Control

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210924