Disclosure of Invention
In view of the above problems, embodiments of the present invention provide a method for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device, and a corresponding apparatus for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device, which overcome or at least partially solve the above problems.
In order to solve the above problems, an embodiment of the present invention discloses a method for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device, where the floating object is an object located off the ground plane, and the method includes:
when the vehicle is located at a first position, acquiring a first environment image of the floating object by using the vehicle-mounted monocular camera device;
when the vehicle is located at a second position, acquiring a second environment image of the floating object by using the vehicle-mounted monocular camera device;
determining a first offset coordinate corresponding to the floating object by using the first environment image, and determining a second offset coordinate corresponding to the floating object by using the second environment image;
acquiring the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate;
and determining the three-dimensional space position of the floating object based on the actual three-dimensional space coordinate.
Optionally, the step of determining a first offset coordinate corresponding to the floating object by using the first environment image and determining a second offset coordinate corresponding to the floating object by using the second environment image includes:
determining a first region containing the floating object in the first environment image, and a second region containing the floating object in the second environment image;
projecting the first region onto the second environment image to obtain a third region;
determining a first offset coordinate corresponding to the floating object by using the third region, and determining a second offset coordinate corresponding to the floating object by using the second region.
Optionally, after the steps of determining a first region containing the floating object in the first environment image and determining a second region containing the floating object in the second environment image, the method further includes:
determining the object type of the floating object based on the first region and the second region.
Optionally, after the step of determining the object type of the floating object based on the first region and the second region, the method further includes:
determining a first floating object in the first region and a second floating object in the second region.
Optionally, the step of determining a first offset coordinate corresponding to the floating object by using the third region and determining a second offset coordinate corresponding to the floating object by using the second region includes:
determining a second region and a third region that match each other based on the object type of the floating object and on the matched first floating object and second floating object, where the third region contains a third floating object corresponding to the first floating object;
determining paired first and second offset coordinates based on the second floating object and the third floating object.
Optionally, the method further comprises:
performing key point detection on the second region and the third region, determining a first key point corresponding to the floating object in the second region, and determining a second key point corresponding to the floating object in the third region;
and determining a corresponding third offset coordinate from the first key point, and a corresponding fourth offset coordinate from the second key point.
Optionally, the step of obtaining the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate includes:
determining a first actual three-dimensional space coordinate of the floating object from the first offset coordinate and the second offset coordinate by the triangulation principle;
determining a second actual three-dimensional space coordinate of the floating object from the third offset coordinate and the fourth offset coordinate by the triangulation principle;
and weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the floating object.
An embodiment of the present invention also discloses an apparatus for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device, where the floating object is an object located off the ground plane, and the apparatus includes:
a first environment image acquisition module, configured to acquire a first environment image of the floating object by using the vehicle-mounted monocular camera device when the vehicle is located at a first position;
a second environment image acquisition module, configured to acquire a second environment image of the floating object by using the vehicle-mounted monocular camera device when the vehicle is located at a second position;
an offset coordinate determination module, configured to determine a first offset coordinate corresponding to the floating object by using the first environment image, and a second offset coordinate corresponding to the floating object by using the second environment image;
an actual three-dimensional space coordinate acquisition module, configured to acquire the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate;
and a three-dimensional space position determination module, configured to determine the three-dimensional space position of the floating object based on the actual three-dimensional space coordinate.
The embodiment of the invention also discloses a vehicle, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform one or more methods as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform one or more of the methods described above.
The embodiment of the invention has the following advantages:
In the embodiment of the present invention, when the vehicle is located at a first position, a first environment image of the floating object is acquired by the vehicle-mounted monocular camera device; when the vehicle is located at a second position, a second environment image of the floating object is acquired by the same device. A first offset coordinate corresponding to the floating object is determined from the first environment image, and a second offset coordinate from the second environment image; the actual three-dimensional space coordinate of the floating object is then obtained from the two offset coordinates, and the three-dimensional space position of the floating object is determined from that coordinate. In this way, the actual three-dimensional spatial position of an object located off the ground plane can be obtained by image detection during automated driving, which broadens the application range of vehicle image detection.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below. It is to be understood that the embodiments described are only a few embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
One of the core ideas of the embodiments of the present invention is as follows: when the vehicle is located at a first position, a first environment image of the floating object is acquired by the vehicle-mounted monocular camera device, and when the vehicle is located at a second position, a second environment image of the floating object is acquired. A first offset coordinate corresponding to the floating object is determined from the first environment image, a second offset coordinate from the second environment image, the actual three-dimensional space coordinate of the floating object is obtained from the two offset coordinates, and the three-dimensional space position of the floating object is determined from that coordinate. The actual three-dimensional spatial position of an object located off the ground plane can thus be obtained by image detection during automated driving, which broadens the application range of vehicle image detection.
Referring to fig. 1, a flowchart of the steps of an embodiment of a method for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention is shown, where the floating object is an object located off the ground plane. The method may specifically include the following steps:
Step 101, when the vehicle is located at a first position, acquiring a first environment image of the floating object by using the vehicle-mounted monocular camera device;
Specifically, a monocular camera device is a camera device with a single camera, and a floating object is an object located off the ground plane, such as a traffic light or a billboard mounted overhead.
In this embodiment, while the vehicle is moving and reaches a certain position, the vehicle-mounted monocular camera device captures an image of the floating object to obtain the first environment image.
Step 102, when the vehicle is located at a second position, acquiring a second environment image of the floating object by using the vehicle-mounted monocular camera device;
In order to obtain two images of the floating object taken from different angles and distances, after the first environment image is captured, the vehicle travels some distance to a second position away from the position where the first environment image was taken; the distance between the vehicle-mounted monocular camera device and the floating object has then changed, and the device captures the floating object again to obtain the second environment image. As an example, a shooting interval may be preset in the vehicle. While the vehicle is moving, a timer is started after the first environment image is captured; when the timer reaches the preset interval, the vehicle has moved to the second position, so that its angle and distance to the floating object differ from those at the first capture, and the second environment image is taken. Two environment images with different shooting angles and distances are thus obtained.
Step 103, determining a first offset coordinate corresponding to the floating object by using the first environment image, and determining a second offset coordinate corresponding to the floating object by using the second environment image;
Image processing is performed on the first environment image and the second environment image separately to determine the floating object in each image and the offset coordinates corresponding to it. Because a captured image undergoes a transformation from the world coordinate system to the camera coordinate system, and a further conversion from the camera coordinate system to pixel coordinates, the coordinates of the floating object in an environment image are offset from its actual spatial coordinates; the first offset coordinate of the floating object in the first environment image and the second offset coordinate in the second environment image are obtained accordingly. As an example, the floating object and its offset coordinates in each environment image may be determined by a recognition model trained in advance through deep learning: after the captured first and second environment images are input into the recognition model, it outputs the recognized floating object and the offset coordinates.
Step 104, acquiring the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate;
Since the first offset coordinate and the second offset coordinate correspond to the same actual three-dimensional point on the same floating object, a correspondence exists between them. The actual three-dimensional space coordinate of the floating object can therefore be computed from the first and second offset coordinates, combined with the camera's intrinsic and extrinsic parameters and the translation and rotation parameters of the vehicle.
Step 105, determining the three-dimensional space position of the floating object based on the actual three-dimensional space coordinate.
Specifically, after the actual three-dimensional space coordinate of the floating object is obtained, the actual three-dimensional position of the floating object in the real world can be determined from it.
In the embodiment of the present invention, the first and second environment images of the floating object are acquired by the vehicle-mounted monocular camera device at the first and second positions respectively, the first and second offset coordinates corresponding to the floating object are determined from them, the actual three-dimensional space coordinate of the floating object is obtained from the two offset coordinates, and the three-dimensional space position of the floating object is determined from that coordinate. The actual three-dimensional spatial position of an object located off the ground plane can thus be obtained by image detection during automated driving, which broadens the application range of vehicle image detection.
Referring to fig. 2, a flowchart of the steps of another embodiment of the method for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention is shown, where the floating object is an object located off the ground plane. The method may specifically include the following steps:
Step 201, when the vehicle is located at a first position, acquiring a first environment image of the floating object by using the vehicle-mounted monocular camera device;
Step 202, when the vehicle is located at a second position, acquiring a second environment image of the floating object by using the vehicle-mounted monocular camera device;
Since steps 201 and 202 are similar to steps 101 and 102, reference may be made to the description of steps 101 and 102 in the previous embodiment; the details are not repeated here.
Step 203, determining a first region containing the floating object in the first environment image, and determining a second region containing the floating object in the second environment image;
Specifically, bounding-box (bbox) detection may be applied to the first environment image and the second environment image; after this processing, a first region and a second region containing the floating object are framed in the first and second environment images respectively.
Step 204, determining the object type of the floating object based on the first region and the second region;
A floating object may be, for example, a bird flying in the air, a roadside traffic light, or a billboard, and different floating objects have different appearance characteristics, so the floating objects can be classified, for example into animal → bird or traffic facility → traffic light. Classification (cls) of the first region and second region containing the floating object yields the object type of the floating object.
Step 205, determining a first floating object in the first region and a second floating object in the second region;
Specifically, after the object type of the floating object is determined, mask processing may be performed on the first region and the second region. After masking, all pixels belonging to the floating object can be determined in the first region, constituting the first floating object, and likewise in the second region, constituting the second floating object.
Step 206, projecting the first region onto the second environment image to obtain a third region;
Specifically, when the vehicle captures the first and second environment images, the rotation and translation parameters describing the vehicle's motion relative to the floating object can be obtained from a high-precision odometer installed on the vehicle. Since the vehicle-mounted monocular camera device is fixed to the vehicle, it rotates and translates together with the vehicle, so the first region in the first environment image can be projected onto the second environment image according to these rotation and translation parameters to obtain the third region.
Step 207, determining a first offset coordinate corresponding to the floating object by using the third region, and determining a second offset coordinate corresponding to the floating object by using the second region.
Since the third region is the projection of the first region, and both the first region and the second region contain the floating object, the projected third region is matched against the second region. After a successful match, matched feature points of the first floating object and the second floating object can be obtained from the third region and the second region, and the coordinates of these feature points, namely the first offset coordinate and the second offset coordinate, are determined.
In an optional embodiment of the present invention, the step of determining a first offset coordinate corresponding to the floating object by using the third region and determining a second offset coordinate corresponding to the floating object by using the second region further includes the following sub-steps:
determining a second region and a third region that match each other based on the object type of the floating object and on the matched first floating object and second floating object, where the third region contains a third floating object corresponding to the first floating object;
determining paired first and second offset coordinates based on the second floating object and the third floating object.
Specifically, when the first region is projected onto the second environment image, the shooting angles differ, so although the projected third region and the second region both contain the floating object, there is parallax between them as well as an overlapping area. Because the first region and the second region were assigned the same class, a second region and a third region containing the same floating object can be matched using two conditions: identical object type and sufficient overlap. The third region contains the third floating object; since the third region is the projection of the first region, the third floating object is the projection of the first floating object, and since the third region matches the second region, a correspondence exists between the third floating object and the second floating object. The coordinate values of the matched pixels in the third and second floating objects then give the paired first and second offset coordinates. For example, if a pixel A in the third floating object corresponds to a pixel B in the second floating object, the coordinate value of pixel A is the first offset coordinate and that of pixel B is the second offset coordinate.
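The two matching conditions above, identical object type and sufficient overlap, can be sketched with a standard intersection-over-union test. The threshold and box values are assumptions for illustration; the embodiment does not specify a concrete overlap criterion.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x_min, y_min, x_max, y_max) boxes."""
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)


def match_regions(third_regions, second_regions, iou_threshold=0.3):
    """Pair projected (third) regions with second regions that share the
    same object type and overlap sufficiently; each element is
    (bounding_box, object_type)."""
    pairs = []
    for t_box, t_cls in third_regions:
        for s_box, s_cls in second_regions:
            if t_cls == s_cls and iou(t_box, s_box) >= iou_threshold:
                pairs.append((t_box, s_box))
    return pairs
```

A matched pair of regions is then handed to the pixel-level correspondence step that yields the paired first and second offset coordinates.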
In an optional embodiment of the invention, the method further comprises:
performing key point detection on the second region and the third region, determining a first key point corresponding to the floating object in the second region, and determining a second key point corresponding to the floating object in the third region;
and determining a corresponding third offset coordinate from the first key point, and a corresponding fourth offset coordinate from the second key point.
Besides computing matched pixels from the paired second and third floating objects, key point detection can be performed directly on the second region and the third region, yielding the first key point and the second key point corresponding to the floating object in each region. Because the floating object is the same, the first key point obtained in the second region corresponds to the second key point obtained in the third region, and the third offset coordinate of the first key point in the second region and the fourth offset coordinate of the second key point in the third region are determined accordingly. As an example, the bbox, cls, mask, and key point detection operations on the environment images may all be carried out by a single processing model trained in advance through deep learning: after the captured first and second environment images are input into the processing model, it automatically performs the processing and outputs the results of bbox detection, classification, masking, and key point detection.
Step 208, acquiring the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate.
In an optional embodiment of the present invention, the step of acquiring the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate further includes the following sub-steps:
determining a first actual three-dimensional space coordinate of the floating object from the first offset coordinate and the second offset coordinate by the triangulation principle;
determining a second actual three-dimensional space coordinate of the floating object from the third offset coordinate and the fourth offset coordinate by the triangulation principle;
and weighting the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the floating object.
Specifically, to determine the final actual three-dimensional space coordinate more accurately, a first estimate is computed by the triangulation principle from the first offset coordinate, the second offset coordinate, the camera's intrinsic and extrinsic parameters, and the rotation and translation parameters of the vehicle; a second estimate is computed in the same way from the third and fourth offset coordinates. The two estimated coordinates are then weighted to obtain the target actual three-dimensional space coordinate of the floating object.
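The triangulation and weighting steps can be sketched with the standard linear (DLT) two-view triangulation. The projection matrices, the true point, and the equal 0.5/0.5 weights are illustrative assumptions; the embodiment does not specify the weighting scheme.

```python
import numpy as np


def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from two views.

    P1, P2 are the 3x4 projection matrices K [R | t] of the camera at the
    two vehicle positions; uv1, uv2 are the paired offset (pixel)
    coordinates of the same point on the floating object. Solves the
    homogeneous system A X = 0 via SVD.
    """
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to (x, y, z)


# Assumed setup: identity intrinsics, second camera 1 m to the right
# (equivalently, the point shifts 1 m left in the second camera frame).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([2.0, 3.0, 10.0])            # hypothetical traffic light
uv1 = X_true[:2] / X_true[2]                   # its projection in view 1
uv2 = (X_true[:2] + [-1.0, 0.0]) / X_true[2]   # its projection in view 2

X_box = triangulate(P1, P2, uv1, uv2)  # estimate from region matching
X_kp = triangulate(P1, P2, uv1, uv2)   # estimate from key points (same pair here)
# Weighted fusion of the two estimates (equal weights are an assumption):
X_final = 0.5 * X_box + 0.5 * X_kp
```

In practice the two offset-coordinate pairs come from different pipelines (region matching versus key point detection), so the two triangulated estimates differ slightly and the weighting averages out their errors.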
Step 209, determining the three-dimensional space position of the floating object based on the actual three-dimensional space coordinate.
After the actual three-dimensional space coordinate of the floating object is obtained, the three-dimensional position of the floating object in the real world can be determined from it.
In the embodiment of the present invention, the first and second environment images of the floating object are acquired by the vehicle-mounted monocular camera device at the first and second positions. A first region and a second region containing the floating object are determined in the two images, the object type of the floating object is determined from these regions, and the first floating object in the first region and the second floating object in the second region are determined. The first region is projected onto the second environment image to obtain a third region, from which the first offset coordinate is determined, while the second offset coordinate is determined from the second region. Key point detection on the second and third regions yields a first key point and a second key point corresponding to the floating object, from which the third and fourth offset coordinates are determined. A first actual three-dimensional space coordinate is computed from the first and second offset coordinates by the triangulation principle, a second actual three-dimensional space coordinate from the third and fourth offset coordinates, and the two are weighted to obtain the target actual three-dimensional space coordinate, from which the three-dimensional space position of the floating object is determined. Using two sets of offset coordinates and weighting the two resulting estimates makes the determined actual three-dimensional spatial position more accurate.
Referring to fig. 3, a block diagram of an embodiment of the apparatus for determining the spatial position of a floating object based on a vehicle-mounted monocular camera device according to the present invention is shown, where the floating object is an object located off the ground plane. The apparatus may specifically include the following modules:
a first environment image acquisition module 301, configured to acquire a first environment image of the floating object by using the vehicle-mounted monocular camera device when the vehicle is located at a first position;
a second environment image acquisition module 302, configured to acquire a second environment image of the floating object by using the vehicle-mounted monocular camera device when the vehicle is located at a second position;
an offset coordinate determination module 303, configured to determine a first offset coordinate corresponding to the floating object by using the first environment image, and a second offset coordinate corresponding to the floating object by using the second environment image;
an actual three-dimensional space coordinate acquisition module 304, configured to acquire the actual three-dimensional space coordinate of the floating object by using the first offset coordinate and the second offset coordinate;
and a three-dimensional space position determination module 305, configured to determine the three-dimensional space position of the floating object based on the actual three-dimensional space coordinate.
In an embodiment of the present invention, the offset coordinate determining module 303 includes:
a region determination submodule, configured to determine a first region containing the floating object in the first environment image and a second region containing the floating object in the second environment image;
a projection submodule, configured to project the first region onto the second environment image to obtain a third region;
and a determination submodule, configured to determine a first offset coordinate corresponding to the floating object by using the third region, and a second offset coordinate corresponding to the floating object by using the second region.
In an embodiment of the present invention, the apparatus further includes:
an object type determination module, configured to determine the object type of the floating object based on the first region and the second region.
In an embodiment of the present invention, the apparatus further includes:
a floating object determination module, configured to determine a first floating object in the first region and a second floating object in the second region.
In an embodiment of the present invention, the determining sub-module further includes:
a second and third region determination unit, configured to determine a second region and a third region that match each other based on the object type of the floating object and on the matched first floating object and second floating object, where the third region contains a third floating object corresponding to the first floating object;
and a first and second offset coordinate determination unit, configured to determine paired first and second offset coordinates based on the second floating object and the third floating object.
In an embodiment of the present invention, the apparatus further includes:
a key point determining module, configured to perform key point detection on the second region and the third region, determine a first key point corresponding to the floating object in the second region, and determine a second key point corresponding to the floating object in the third region;
and an offset coordinate acquiring module, configured to determine a corresponding third offset coordinate from the first key point and a corresponding fourth offset coordinate from the second key point.
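The key point matching and offset computation can be sketched as follows. This is a minimal numpy illustration under two interpretive assumptions not fixed by this disclosure: key points are paired by nearest descriptor distance, and an "offset coordinate" is taken to be a key point's pixel displacement from the camera's principal point:

```python
import numpy as np

def match_keypoints(desc_a, desc_b):
    """Nearest-neighbour matching of key point descriptors (rows of
    desc_a against rows of desc_b) by Euclidean distance; returns, for
    each row of desc_a, the index of its closest row in desc_b."""
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return dists.argmin(axis=1)

def offsets_from_keypoints(keypoints, principal_point):
    """Pixel offsets of key points from the principal point -- one
    assumed reading of the 'offset coordinate' fed to triangulation."""
    return np.asarray(keypoints, dtype=float) - np.asarray(principal_point, dtype=float)
```

In practice the descriptors would come from a feature detector run on the second and third regions; here they are treated as plain vectors so the pairing logic stands on its own.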
In an embodiment of the present invention, the actual three-dimensional space coordinate acquiring module 304 includes:
a first actual three-dimensional space coordinate determining submodule, configured to determine a first actual three-dimensional space coordinate of the floating object from the first offset coordinate and the second offset coordinate according to the triangulation principle;
a second actual three-dimensional space coordinate determining submodule, configured to determine a second actual three-dimensional space coordinate of the floating object from the third offset coordinate and the fourth offset coordinate according to the triangulation principle;
and a target actual three-dimensional space coordinate determining submodule, configured to weight the first actual three-dimensional space coordinate and the second actual three-dimensional space coordinate to obtain a target actual three-dimensional space coordinate of the floating object.
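The triangulation and weighting steps can be sketched with midpoint triangulation: each offset coordinate, together with the camera model, defines a viewing ray from the corresponding vehicle position, the object's three-dimensional coordinate is taken as the point closest to both rays, and the two estimates are then fused with a weight that is assumed here (the disclosure does not fix the weighting scheme):

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Midpoint triangulation: given two viewing rays (camera centre c
    plus t times direction d), return the point midway between the
    closest points on the two rays."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = c1 - c2
    denom = a * c - b * b                 # zero only for parallel rays
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    p1 = c1 + t1 * d1                     # closest point on the first ray
    p2 = c2 + t2 * d2                     # closest point on the second ray
    return (p1 + p2) / 2.0

def fuse(coord_region, coord_keypoint, weight=0.5):
    """Weighted fusion of the region-based and key-point-based
    estimates (equal weighting is an assumption)."""
    return weight * np.asarray(coord_region, dtype=float) \
        + (1 - weight) * np.asarray(coord_keypoint, dtype=float)
```

For rays that actually intersect, the midpoint reduces to the intersection point; in the presence of pixel noise the two rays are skew, and the midpoint is a common closed-form compromise.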
Since the apparatus embodiment is substantially similar to the method embodiment, its description is relatively brief; for relevant details, reference may be made to the corresponding parts of the description of the method embodiment.
The embodiment of the invention also discloses a vehicle, which comprises:
one or more processors; and
one or more machine readable media having instructions stored thereon that, when executed by the one or more processors, cause the vehicle to perform one or more methods as described above.
Embodiments of the invention also disclose one or more machine-readable media having instructions stored thereon, which when executed by one or more processors, cause the processors to perform one or more of the methods described above.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the parts that are identical or similar between the embodiments, reference may be made to one another.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, terminal devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing terminal to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing terminal to cause a series of operational steps to be performed on the computer or other programmable terminal to produce a computer implemented process such that the instructions which execute on the computer or other programmable terminal provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, in this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or terminal device that comprises the element.
The method for determining the spatial position of a floating object based on a vehicle-mounted monocular imaging device, the corresponding apparatus, the vehicle, and the readable medium provided by the present invention have been described in detail above. Specific examples have been applied herein to explain the principles and implementations of the present invention, and the description of the above embodiments is intended only to help in understanding the method and its core idea. Meanwhile, a person skilled in the art may, based on the idea of the present invention, make changes to the specific implementations and the application scope. In summary, the contents of this specification should not be construed as limiting the present invention.