CN113850881A - Image generation method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN113850881A
Authority
CN
China
Prior art keywords
panoramic image
image
panoramic
vehicle bottom
position mapping
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111016477.5A
Other languages
Chinese (zh)
Inventor
李续贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ecarx Hubei Tech Co Ltd
Original Assignee
Hubei Ecarx Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hubei Ecarx Technology Co Ltd filed Critical Hubei Ecarx Technology Co Ltd
Priority to CN202111016477.5A priority Critical patent/CN113850881A/en
Publication of CN113850881A publication Critical patent/CN113850881A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T11/00 2D [Two Dimensional] image generation
                • G06T1/00 General purpose image data processing
                    • G06T1/0007 Image acquisition
                • G06T3/00 Geometric image transformation in the plane of the image
                    • G06T3/40 Scaling the whole image or part thereof
                        • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
                • G06T7/00 Image analysis
                    • G06T7/70 Determining position or orientation of objects or cameras
                        • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
                            • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
                • G06T2200/00 Indexing scheme for image data processing or generation, in general
                    • G06T2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20092 Interactive image processing based on input by user
                            • G06T2207/20104 Interactive definition of region of interest [ROI]
                    • G06T2207/30 Subject of image; Context of image processing
                        • G06T2207/30248 Vehicle exterior or interior
                            • G06T2207/30252 Vehicle exterior; Vicinity of vehicle
                • G06T2210/00 Indexing scheme for image generation or computer graphics
                    • G06T2210/61 Scene description

Abstract

The invention provides an image generation method, an image generation device, image generation equipment and a readable storage medium. The image generation method comprises the following steps: acquiring a second panoramic image acquired currently and a first panoramic image acquired last time; obtaining the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively; calculating the position mapping relation between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 feature points; and searching the first panoramic image, according to the position mapping relation, for an overlapping area image corresponding to the vehicle bottom image in the second panoramic image, and splicing the overlapping area image into the vehicle bottom area of the second panoramic image to generate the vehicle bottom image in the second panoramic image. The invention enables the vehicle bottom condition to be monitored from the vehicle bottom image, meets parking requirements, requires comparatively little computation, and has strong anti-interference performance.

Description

Image generation method, device, equipment and readable storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image generation method, an image generation device, an image generation apparatus, and a readable storage medium.
Background
With the development of automobile technology, vehicle-mounted panoramic all-around vision (AVM) systems and automatic parking systems are increasingly widely applied. A conventional panoramic all-around system can generally only display images around the vehicle, including a bird's-eye view and a 3D rotation view, but cannot display the image at the bottom of the vehicle, so the requirements of some parking scenes cannot be met and the user experience is poor. In the prior art, to meet this requirement, a vehicle bottom image is generated based on the vehicle speed and the steering wheel angle. However, this method depends on vehicle body size information and vehicle body motion information, involves a large amount of calculation, and is greatly influenced by the outside environment; it therefore has a high requirement on computing power and weak robustness and anti-interference performance, and if this information contains errors, the generated vehicle bottom image is poor and affects the user's experience.
Disclosure of Invention
The invention mainly aims to provide an image generation method, an image generation device, image generation equipment and a readable storage medium, and aims to solve the technical problems that in the prior art, a vehicle-mounted panoramic all-around system cannot monitor images of the bottom of a vehicle and cannot confirm the condition of the bottom of the vehicle.
In a first aspect, the present invention provides an image generation method, including:
acquiring a second panoramic image acquired currently and a first panoramic image acquired last time;
obtaining the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively;
calculating the position mapping relation between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 characteristic points in the first panoramic image and the second panoramic image;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
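The final searching-and-splicing step can be sketched in code. This is an illustrative numpy-only sketch, not the patent's implementation: the function name `fill_vehicle_bottom`, the boolean `bottom_mask`, and the representation of the position mapping relation as a 2x3 affine matrix `M` (mapping pixel coordinates of the second panorama to coordinates in the first) are all assumptions made for illustration.

```python
import numpy as np

def fill_vehicle_bottom(first_img, second_img, bottom_mask, M):
    """Fill the invisible vehicle-bottom region of the current panorama.

    first_img   : H x W array, previously acquired panorama
    second_img  : H x W array, current panorama with an unknown bottom region
    bottom_mask : H x W bool array, True where the vehicle-bottom region lies
    M           : 2 x 3 affine matrix mapping (x, y) pixel coordinates of the
                  second panorama to coordinates in the first panorama
    """
    out = second_img.copy()
    ys, xs = np.nonzero(bottom_mask)
    # Map each bottom pixel of the current panorama back into the previous one.
    pts = np.stack([xs, ys, np.ones_like(xs)])       # 3 x N homogeneous coords
    src = M @ pts                                    # 2 x N coords in first_img
    sx = np.round(src[0]).astype(int)
    sy = np.round(src[1]).astype(int)
    h, w = first_img.shape[:2]
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)  # keep in-bounds lookups only
    out[ys[ok], xs[ok]] = first_img[sy[ok], sx[ok]]   # splice overlap into bottom
    return out
```

With a pure-translation mapping, for example, each masked pixel is simply looked up at the shifted location in the previous panorama; out-of-bounds lookups are left in their original (grey) state.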
Optionally, the step of obtaining the coordinates of the at least 3 feature points respectively corresponding to the first panoramic image and the second panoramic image includes:
extracting a first point set in the region of interest of the first panoramic image based on an ORB algorithm, and extracting a second point set in the region of interest of the second panoramic image based on the ORB algorithm;
calculating a first descriptor matrix of the first point set, and calculating a second descriptor matrix of the second point set;
matching the first descriptor matrix and the second descriptor matrix, and deleting the matching point combination with the matching value not meeting the threshold value based on the matching result;
selecting at least three effective matching point combinations from the remaining matching point combinations, wherein each effective matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching point combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Optionally, the step of obtaining the coordinates of the at least 3 feature points respectively corresponding to the first panoramic image and the second panoramic image further includes:
selecting N points in the region of interest of the first panoramic image, and taking a preset pixel area centered on each point as a first template to obtain N first templates;
selecting N points in the region of interest of the second panoramic image, and taking a preset pixel area centered on each point as a second template to obtain N second templates;
matching the N first templates with the N second templates to obtain at least three effective matching template combinations, wherein each effective matching template combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching template combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Optionally, the region of interest is close to the vehicle bottom region, and the area of the region of interest is smaller than a preset area.
Optionally, the step of searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relationship, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image includes:
detecting whether the position mapping relation is effective or not;
if the position mapping relation is valid, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Optionally, the step of detecting whether the position mapping relationship is valid includes:
determining a rotation angle and a comprehensive translation amount according to the position mapping relation;
detecting whether the rotation angle is in a preset rotation angle range or not and detecting whether the comprehensive translation amount is in a preset comprehensive translation amount range or not;
and if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, determining that the position mapping relationship is valid, otherwise, determining that the position mapping relationship is invalid.
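The validity check described above can be illustrated with a short sketch, assuming the position mapping relation is represented as a near-rigid 2x3 matrix; the function name and the particular threshold values are hypothetical, not values from the patent.

```python
import numpy as np

def check_mapping_valid(M, max_angle_deg=10.0, max_shift_px=40.0):
    """Plausibility check on a 2x3 position mapping (assumed near-rigid).

    Between two consecutive panoramas the vehicle can only have moved a
    little, so the recovered rotation angle and comprehensive translation
    must fall inside preset ranges; otherwise the mapping is rejected.
    """
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))  # rotation from the linear part
    shift = float(np.hypot(M[0, 2], M[1, 2]))         # combined translation magnitude
    return abs(angle) <= max_angle_deg and shift <= max_shift_px
```

A mapping implying a 45-degree turn or a very large jump between two consecutive frames would thus be treated as invalid.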
Optionally, after the step of detecting whether the position mapping relationship is valid, the method includes:
if the position mapping relation is invalid, obtaining the valid position mapping relation determined last time;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the last determined effective position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
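The fallback behaviour (reusing the last valid position mapping when the current one is rejected) can be sketched as a small stateful helper; the class name and the identity-matrix default are illustrative assumptions only.

```python
import numpy as np

class MappingTracker:
    """Keeps the most recent valid position mapping as a fallback.

    If the mapping estimated for the current running period is invalid, the
    last valid mapping is reused, on the assumption that vehicle motion
    changes little between consecutive acquisitions.
    """
    def __init__(self):
        # Identity mapping as a neutral starting value (an assumption).
        self.last_valid = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])

    def update(self, M, is_valid):
        if is_valid:
            self.last_valid = M
        return self.last_valid  # mapping actually used for stitching
```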
In a second aspect, the present invention also provides an image generating apparatus comprising:
the first acquisition module is used for acquiring a second panoramic image acquired currently and a first panoramic image acquired last time;
the second acquisition module is used for acquiring the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively;
the calculation module is used for calculating the position mapping relation between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 characteristic points in the first panoramic image and the second panoramic image;
and the splicing module is used for searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Optionally, in obtaining the corresponding coordinates of the at least 3 feature points in the first panoramic image and the second panoramic image respectively, the second acquisition module is configured to:
extracting a first point set in the region of interest of the first panoramic image based on an ORB algorithm, and extracting a second point set in the region of interest of the second panoramic image based on the ORB algorithm;
calculating a first descriptor matrix of the first point set, and calculating a second descriptor matrix of the second point set;
matching the first descriptor matrix and the second descriptor matrix, and deleting the matching point combination with the matching value not meeting the threshold value based on the matching result;
selecting at least three effective matching point combinations from the remaining matching point combinations, wherein each effective matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching point combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Optionally, in obtaining the corresponding coordinates of the at least 3 feature points in the first panoramic image and the second panoramic image respectively, the second acquisition module is further configured to:
selecting N points in the region of interest of the first panoramic image, and taking a preset pixel area centered on each point as a first template to obtain N first templates;
selecting N points in the region of interest of the second panoramic image, and taking a preset pixel area centered on each point as a second template to obtain N second templates;
matching the N first templates with the N second templates to obtain at least three effective matching template combinations, wherein each effective matching template combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching template combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Optionally, the image generating apparatus further includes a detection module, configured to:
detecting whether the position mapping relation is effective or not;
if the position mapping relation is valid, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Optionally, the detecting module is further configured to:
determining a rotation angle and a comprehensive translation amount according to the position mapping relation;
detecting whether the rotation angle is in a preset rotation angle range or not and detecting whether the comprehensive translation amount is in a preset comprehensive translation amount range or not;
and if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, determining that the position mapping relationship is valid, otherwise, determining that the position mapping relationship is invalid.
Optionally, after the step of detecting whether the position mapping relationship is valid, the splicing module is further configured to:
if the position mapping relation is invalid, obtaining the valid position mapping relation determined last time;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the last determined effective position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
In a third aspect, the present invention also provides an image generation apparatus comprising a processor, a memory, and an image generation program stored on the memory and executable by the processor, wherein the image generation program, when executed by the processor, implements the steps of the image generation method as described above.
In a fourth aspect, the present invention further provides a readable storage medium, on which an image generation program is stored, wherein the image generation program, when executed by a processor, implements the steps of the image generation method as described above.
In the invention, a second panoramic image collected currently and a first panoramic image collected last time are obtained; the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively are obtained; the position mapping relation between the first panoramic image and the second panoramic image is calculated according to the corresponding coordinates of the at least 3 feature points; and an overlapping area image corresponding to the vehicle bottom image in the second panoramic image is searched from the first panoramic image according to the position mapping relation and spliced to the vehicle bottom area in the second panoramic image, so as to generate the vehicle bottom image in the second panoramic image. The invention solves the problems that the existing vehicle-mounted panoramic all-round viewing system cannot monitor the vehicle bottom image and cannot confirm the vehicle bottom condition, realizes monitoring of the vehicle bottom condition according to the vehicle bottom image, meets parking requirements, requires comparatively little computation, and has strong anti-interference performance.
Drawings
Fig. 1 is a schematic diagram of a hardware configuration of an image generating apparatus according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating an image generating method according to an embodiment of the present invention;
FIG. 3 is a schematic view of a panoramic image according to an embodiment of an image generating method of the present invention;
FIG. 4 is a schematic diagram of a region of interest according to an embodiment of the image generation method of the present invention;
FIG. 5 is a diagram illustrating a bottom image generated according to an embodiment of the image generating method of the present invention;
fig. 6 is a functional block diagram of an image generating apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In a first aspect, an embodiment of the present invention provides an image generation apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of the hardware structure of an image generating apparatus according to an embodiment of the present invention. In an embodiment of the present invention, the image generating apparatus may include a processor 1001 (e.g., a Central Processing Unit (CPU)), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication among these components; the user interface 1003 may include a display screen (Display) and an input unit such as a keyboard (Keyboard); the network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wi-Fi interface); the memory 1005 may be a Random Access Memory (RAM) or a non-volatile memory, such as a magnetic disk memory, and may optionally be a storage device independent of the processor 1001. Those skilled in the art will appreciate that the hardware configuration depicted in fig. 1 does not limit the present invention, and more or fewer components than those shown, a combination of some components, or a different arrangement of components may be included.
With continued reference to FIG. 1, the memory 1005 of FIG. 1, which is one type of computer storage medium, may include an operating system, a network communication module, a user interface module, and an image generation program therein. The processor 1001 may call an image generation program stored in the memory 1005, and execute the image generation method provided by the embodiment of the present invention.
The embodiment of the invention provides an image generation method.
Referring to fig. 2, a flowchart of an embodiment of an image generation method is schematically illustrated.
In this embodiment, the image generation method includes:
step S10, acquiring a second panoramic image acquired currently and a first panoramic image acquired last time;
referring to fig. 3, fig. 3 is a schematic view of a panoramic image according to an embodiment of the image generation method of the present invention.
In this embodiment, when the user clicks on the panoramic system interface to enter the vehicle bottom transparent mode, the bottom image generation program starts to run. The vehicle panoramic all-around system (AVM) acquires original images through one camera in each of the front, rear, left and right directions of the vehicle body, splices them into a panoramic image of the vehicle surroundings, and stores the currently spliced panoramic image. Panoramic images acquired at two consecutive times are then obtained, and the position mapping relation between them is solved to obtain the vehicle bottom image in the later of the two. That is, one running period of the bottom image generation program consists of obtaining the currently acquired second panoramic image and the previously acquired first panoramic image, calculating their position mapping relation, and generating the vehicle bottom image in the current second panoramic image. If the program is still running after the vehicle bottom image of the current period has been generated, the next running period is entered: the panoramic image acquired at the new current moment serves as the second panoramic image of that period, and the second panoramic image of the previous period, already filled with the generated vehicle bottom image, serves as the first panoramic image of that period.
As shown in fig. 3, the shaded part T represents the visible area acquired by the 4 cameras and spliced, and the blank part K represents the vehicle bottom image. In actual display, the initial vehicle bottom area of the first panoramic image obtained when the bottom image generation program starts running is invisible and is displayed in a preset color such as grey. Once the bottom image of a second panoramic image is obtained from the first panoramic image and filled into its vehicle bottom area, each subsequently acquired panoramic image comprises the visible area spliced from the camera images, the filled visible vehicle bottom area, and any remaining invisible vehicle bottom area displayed in the preset color such as grey.
Step S20, obtaining the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively;
In this embodiment, when the second panoramic image at the current time and the first panoramic image at the previous time in the running period are obtained, common pixel points are searched for in the first panoramic image and the second panoramic image, and the position mapping relation between the two images is calculated according to the different coordinates of these common pixel points in each image. The common pixel points of the two images are the feature points. The coordinates of the feature points in the first panoramic image and the second panoramic image are obtained in a local coordinate system, namely the pixel coordinate system of the respective panoramic image: a corner point of the first panoramic image or the second panoramic image (for example, the upper left corner) is taken as the origin, and the straight lines extending from that corner point along two sides of the image are taken as the X axis and the Y axis. The process of searching for feature points comprises feature extraction and feature matching. Different feature extraction approaches have different corresponding feature matching algorithms; feature matching algorithms include feature-based matching methods, template-based matching methods, and the like.
When the corresponding coordinates of at least 2 feature points in the first panoramic image and the second panoramic image have been found, the position mapping relation between the two images can in principle be calculated; however, to ensure the accuracy of the obtained position mapping relation, the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image need to be found.
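With at least 3 matched point pairs, a 2x3 affine position mapping can be recovered by least squares. The following numpy sketch is illustrative only: the function name, the matrix convention (second-panorama coordinates mapped to first-panorama coordinates), and the use of a plain least-squares solve are assumptions, not the patent's stated method.

```python
import numpy as np

def estimate_position_mapping(pts_second, pts_first):
    """Estimate the 2x3 affine mapping from coordinates in the second
    panorama to coordinates in the first, from >= 3 matched feature points.
    """
    pts_second = np.asarray(pts_second, dtype=float)
    pts_first = np.asarray(pts_first, dtype=float)
    n = len(pts_second)
    A = np.hstack([pts_second, np.ones((n, 1))])  # n x 3 rows of [x, y, 1]
    # Solve A @ [a, b, tx]^T = x' and A @ [c, d, ty]^T = y' in one call.
    coeffs, *_ = np.linalg.lstsq(A, pts_first, rcond=None)  # 3 x 2 result
    return coeffs.T                                # 2 x 3 affine matrix
```

With more than 3 point pairs the extra equations are absorbed by the least-squares fit, which is one way the accuracy concern mentioned above can be addressed.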
Further, in an optional embodiment, the step S20 includes:
extracting a first point set in the region of interest of the first panoramic image based on an ORB algorithm, and extracting a second point set in the region of interest of the second panoramic image based on the ORB algorithm;
calculating a first descriptor matrix of the first point set, and calculating a second descriptor matrix of the second point set;
matching the first descriptor matrix and the second descriptor matrix, and deleting the matching point combination with the matching value not meeting the threshold value based on the matching result;
selecting at least three effective matching point combinations from the remaining matching point combinations, wherein each effective matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching point combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
In this embodiment, a first point set is extracted from the region of interest of the first panoramic image and a second point set is extracted from the region of interest of the second panoramic image by the ORB algorithm. After feature extraction, the coordinates of each point in the first point set and the second point set in the same global coordinate system can be calculated, so as to obtain a first descriptor matrix of the first point set and a second descriptor matrix of the second point set. Feature matching can then be performed by calculating the distance between the global coordinates of each point described in the first descriptor matrix and those of each point described in the second descriptor matrix. The global coordinate system is determined from the first panoramic image in each running period: the center of the vehicle body in the first panoramic image is taken as the origin, the axial direction of the vehicle body as the X axis, and the orientation of the vehicle body as the Y axis. If the distance between two coordinates is greater than a preset threshold, the two points are judged not to meet the matching-value requirement and cannot serve as a valid matching point combination; matching point combinations whose matching value does not satisfy the threshold are therefore deleted.
Finally, once all the points described by the first descriptor matrix of the first point set and the second descriptor matrix of the second point set have been matched, at least three valid matching point combinations are selected from the remaining matching point combinations whose matching values satisfy the threshold, and the position mapping relationship between the first panoramic image and the second panoramic image is calculated from these combinations alone, which guarantees precision while reducing the demand on computing power. Each valid matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent. The local coordinates of the first matching point in the first panoramic image and the local coordinates of the second matching point in the second panoramic image of each valid matching point combination are taken as the corresponding coordinates of one feature point in the first panoramic image and the second panoramic image, respectively.
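As a hedged illustration of this matching-and-filtering step, the sketch below matches two ORB-style binary descriptor matrices by Hamming distance and then deletes combinations whose global coordinates disagree. It is a minimal numpy reconstruction, not the patent's implementation: the descriptor layout (256-bit descriptors packed into 32 uint8 bytes, as produced by e.g. OpenCV's `cv2.ORB_create`), the coordinate threshold, and all names are illustrative assumptions.

```python
import numpy as np

def hamming_distances(desc_a, desc_b):
    """Pairwise Hamming distance between two binary descriptor matrices
    (each row is one descriptor packed into uint8 bytes)."""
    # XOR every pair of descriptors, then count the differing bits.
    xor = desc_a[:, None, :] ^ desc_b[None, :, :]
    return np.unpackbits(xor, axis=2).sum(axis=2)

def match_and_filter(desc1, pts1, desc2, pts2, coord_thresh):
    """Match each descriptor of set 1 to its nearest neighbour in set 2,
    then delete combinations whose global coordinates are too far apart."""
    dist = hamming_distances(desc1, desc2)
    nearest = dist.argmin(axis=1)
    pairs = []
    for i, j in enumerate(nearest):
        # Points that truly correspond should have (nearly) identical
        # global coordinates; a large coordinate distance invalidates the pair.
        if np.linalg.norm(pts1[i] - pts2[j]) <= coord_thresh:
            pairs.append((i, j, int(dist[i, j])))
    # Best matches first; at least three pairs are needed downstream.
    pairs.sort(key=lambda p: p[2])
    return pairs
```

In a real AVM pipeline the two point sets, their descriptors, and the preset threshold would come from the ORB detector run on the two regions of interest.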
Further, in an optional embodiment, the step S20 further includes:
selecting N points in the region of interest of the first panoramic image, and taking a preset pixel area centered on each point as a first template to obtain N first templates;
selecting N points in the region of interest of the second panoramic image, and taking a preset pixel area centered on each point as a second template to obtain N second templates;
matching the N first templates with the N second templates to obtain at least three effective matching template combinations, wherein each effective matching template combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching template combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
In this embodiment, a template-based matching method is provided, different from the above-mentioned feature-based matching method. N fixed points are selected in the region of interest of the first panoramic image and in the region of interest of the second panoramic image, for example at fixed intervals. A preset pixel area centered on each of the N points in the first panoramic image is taken as a first template to obtain N first templates; by analogy, a preset pixel area centered on each of the N points in the second panoramic image is taken as a second template to obtain N second templates. At least three valid matching template combinations are then obtained by the template matching technique, each comprising a first matching point in the first panoramic image and a second matching point in the second panoramic image, the global coordinates of the first matching point and the second matching point being consistent. The local coordinates of the first matching point in the first panoramic image and the local coordinates of the second matching point in the second panoramic image of each valid matching template combination are taken as the corresponding coordinates of one feature point in the first panoramic image and the second panoramic image, respectively.
To meet the accuracy requirement while reducing the amount of calculation, 15-20 points are generally selected, and the image template centered on each point should be neither too large nor too small: if the template is too large, the image-processing workload is heavy and the demands on the embedded system hardware are too high; if it is too small, it contains too few features, the matching accuracy drops sharply, and the template may fail to be detected. Therefore, according to the AVM stitching frame rate and the vehicle speed, a template of 30 × 30 pixels centered on each point may be selected.
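The template search itself can be sketched as an exhaustive sum-of-squared-differences scan. This is a toy numpy stand-in for a library call such as OpenCV's `cv2.matchTemplate`; the function name and the SSD criterion are illustrative assumptions, and in practice the templates would be the 30 × 30 pixel patches described above.

```python
import numpy as np

def find_template(image, template):
    """Exhaustive template search: return the (row, col) of the top-left
    corner where the sum of squared differences (SSD) is smallest."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            # Promote to int64 so squared differences cannot overflow uint8.
            ssd = np.sum((patch.astype(np.int64) - template.astype(np.int64)) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos
```

The returned corner gives the matched point's local coordinates in the searched image, from which the valid matching template combinations are then formed.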
Further, in an optional embodiment, the region of interest is close to a vehicle bottom region, and the area of the region of interest is smaller than a preset area.
Referring to fig. 4, fig. 4 is a schematic diagram of a region of interest according to an embodiment of the image generation method of the present invention.
In this embodiment, as shown in fig. 4, the region of interest M is located near the periphery of the vehicle bottom region in the panoramic image. The region of interest M is selected according to the computing power of the platform and the scene requirements: its area is preset on the basis of those requirements, the position coordinates of its corner points are determined, and a region of interest smaller than the preset area of the region M is selected according to those corner coordinates. Compared with the prior art, which selects the whole panoramic image, obtaining the feature point combinations from only a part of it, the region of interest M, reduces the demand on overall computing capacity.
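A minimal sketch of how such a region of interest M could be carved out as a band around the vehicle-bottom rectangle, under the assumption that both are axis-aligned rectangles in the top-view panorama; the band width, coordinates, and function name are illustrative only.

```python
import numpy as np

def roi_band(panorama_shape, car_rect, band=40):
    """Return a boolean mask selecting a band `band` pixels wide
    around the vehicle-bottom rectangle car_rect = (x0, y0, x1, y1)."""
    h, w = panorama_shape
    x0, y0, x1, y1 = car_rect
    mask = np.zeros((h, w), dtype=bool)
    # Outer rectangle: the vehicle rectangle enlarged by the band width,
    # clipped to the panorama bounds.
    mask[max(0, y0 - band):min(h, y1 + band),
         max(0, x0 - band):min(w, x1 + band)] = True
    # Punch out the vehicle-bottom rectangle itself.
    mask[y0:y1, x0:x1] = False
    return mask
```

Because the mask covers only a thin ring around the vehicle, feature extraction and matching touch far fewer pixels than a scan of the full panorama.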
Step S30, calculating the position mapping relation between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 characteristic points in the first panoramic image and the second panoramic image;
in this embodiment, from the corresponding coordinates of the obtained at least 3 feature points in the first panoramic image and the second panoramic image, a transformation relationship from the first panoramic image to the second panoramic image can be derived. When the vehicle moves straight forward or backward, the transformation is a pure translation; when the vehicle turns, it is a combination of translation and rotation. The transformation of the vehicle's panoramic image from the first panoramic image to the second panoramic image can therefore be approximated as a rotation-and-translation relationship, i.e. a rigid affine transformation. The 2 × 3 matrix of this rigid affine transformation contains 6 unknowns: 4 entries in the rotation part and 2 in the translation part. The equations for these 6 unknowns can be set up from the corresponding coordinates of the at least 3 feature points in the two panoramic images, and the matrix is solved using an affine transformation function obtained from a computer vision and machine learning library, yielding the rotation relationship and the translation relationship from the first panoramic image to the second panoramic image, and thereby the position mapping relationship between the first panoramic image and the second panoramic image.
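The patent leaves the library function unnamed (in OpenCV it would presumably be something like `cv2.estimateAffine2D`). A numpy least-squares equivalent for 3 or more correspondences can be sketched as follows; the function name is an assumption for illustration:

```python
import numpy as np

def fit_affine(src, dst):
    """Fit the 2x3 affine matrix M (6 unknowns: 4 'rotation' entries
    plus 2 translations) such that dst ~= M @ [x, y, 1]^T, from at
    least 3 point correspondences, by linear least squares."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    A = np.hstack([src, ones])          # N x 3 rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                          # the 2 x 3 mapping matrix
```

For a pure rotation-plus-translation, the recovered matrix has the form [[cos θ, -sin θ, tx], [sin θ, cos θ, ty]], from which the rotation and translation relationships are read off directly.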
Step S40, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relationship, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Referring to fig. 5, fig. 5 is a schematic diagram of generating a bottom image according to an embodiment of the image generation method of the present invention.
In the present embodiment, as shown in fig. 5, the position mapping relationship between the first panoramic image when the vehicle is at position A and the second panoramic image when the vehicle is at position B is calculated. From this position mapping relationship, the mapping of the first panoramic image (position A) onto the second panoramic image (position B) can be obtained, and the image area of the first panoramic image that overlaps the vehicle bottom area of the second panoramic image is determined, such as the overlapping area P in fig. 5. The image of the overlapping area P is extracted from the first panoramic image and stitched to the corresponding position in the vehicle bottom area of the second panoramic image, generating the vehicle bottom image in the second panoramic image when the vehicle is at position B. This vehicle bottom image is displayed on the screen of the panoramic all-around system, and the second panoramic image with the stitched vehicle bottom image is stored as the first panoramic image of the next operation cycle. When the image of the overlapping area P is stitched into the vehicle bottom area of the second panoramic image, the edges of the overlap leave obvious traces, i.e. visible stitching lines at the image boundaries. A picture blending algorithm in the OpenGL open graphics library can therefore be used to gradually fuse the stitching boundaries, eliminating the stitching lines and improving the quality of the vehicle bottom image displayed on the screen of the vehicle-mounted panoramic all-around system.
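A hedged sketch of this stitching step for the simplest case, a straight-moving vehicle whose position mapping is a pure integer pixel translation; rotation handling (a full affine warp) and the OpenGL seam blending described above are omitted, and all names and the integer-translation assumption are illustrative:

```python
import numpy as np

def fill_car_bottom(prev_pano, curr_pano, car_rect, t):
    """Copy into the vehicle-bottom rectangle of the current panorama
    the pixels that covered that area in the previous panorama.
    t = (tx, ty) is the integer translation (in pixels) mapping
    previous-panorama coordinates to current ones; the source region
    is assumed to stay inside the previous panorama."""
    x0, y0, x1, y1 = car_rect
    tx, ty = t
    out = curr_pano.copy()
    # The area under the car now was visible at (x - tx, y - ty) before.
    out[y0:y1, x0:x1] = prev_pano[y0 - ty:y1 - ty, x0 - tx:x1 - tx]
    return out
```

Storing the returned panorama as the next cycle's "first panoramic image" keeps the under-body area filled while the vehicle continues to move.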
Further, in an optional embodiment, the step S40 includes:
detecting whether the position mapping relation is effective or not;
if the position mapping relation is valid, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
In this embodiment, after the position mapping relationship between the first panoramic image and the second panoramic image is obtained through calculation, it is further necessary to detect whether the obtained position mapping relationship is valid. If it is valid, the overlapping area image corresponding to the vehicle bottom image in the second panoramic image can be searched for from the first panoramic image using the determined valid position mapping relationship, and stitched to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Further, in an optional embodiment, the step of detecting whether the position mapping relationship is valid includes:
determining a rotation angle and a comprehensive translation amount according to the position mapping relation;
detecting whether the rotation angle is in a preset rotation angle range or not and detecting whether the comprehensive translation amount is in a preset comprehensive translation amount range or not;
and if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, determining that the position mapping relationship is valid, otherwise, determining that the position mapping relationship is invalid.
In this embodiment, after the position mapping relationship between the first panoramic image and the second panoramic image is obtained through calculation, it is further necessary to detect whether it is valid. From the way the position mapping relationship is calculated, this detection consists of checking whether the rotation angle and the comprehensive translation amount derived from it lie within preset ranges: if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, the position mapping relationship is determined to be valid; otherwise, as soon as either condition is not satisfied, it is determined to be invalid. For example, in general, use of the vehicle-mounted panoramic all-around system (AVM) needs to satisfy the following preconditions: the vehicle speed V is within 0-20 km/h; the AVM frame rate f is within 20-25 fps; the wheel rotation angle is within -38° to +38°; and the AVM top-view image coordinate scale s is 80 pix/m. If the AVM is applied to a vehicle type whose body wheelbase is within 2.5 m-3.5 m and whose wheel track is within 1.6 m-2.0 m, then by formula 1 (given as an image in the original publication) the range of the rotation angle θ is calculated to be [-6.63°, +6.63°], and this range is taken as the preset rotation angle range for the vehicle type; by formula 2 (likewise given as an image), the comprehensive translation amount √(tx² + ty²) is calculated to have a range of [-29.6, +29.6], and this range is taken as the preset comprehensive translation amount range for the vehicle type. The rotation angle θ obtained from the analyzed position mapping relationship and the comprehensive translation amount √(tx² + ty²) combined from its translation amounts tx and ty are then compared with the preset rotation angle range and the preset comprehensive translation amount range; if either of the two lies outside its preset range, the calculated position mapping relationship is judged to be invalid.
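The validity check can be sketched by decomposing the 2 × 3 mapping matrix into its rotation angle and combined translation and comparing them with the preset ranges; the default ranges below are the worked example's [-6.63°, +6.63°] and 29.6 pixels, and the function name is illustrative:

```python
import math

def mapping_is_valid(M, angle_range=(-6.63, 6.63), trans_max=29.6):
    """Check a 2x3 rigid-affine mapping matrix M: the rotation angle
    (degrees) must lie in angle_range and the combined translation
    sqrt(tx^2 + ty^2) (pixels, nonnegative) must not exceed trans_max."""
    theta = math.degrees(math.atan2(M[1][0], M[0][0]))
    t = math.hypot(M[0][2], M[1][2])
    return angle_range[0] <= theta <= angle_range[1] and t <= trans_max
```

Mappings that fail either check are discarded, and the stitching step falls back to the last valid mapping as described below.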
Further, in an optional embodiment, after the step of detecting whether the position mapping relationship is valid, the method includes:
if the position mapping relation is invalid, obtaining the valid position mapping relation determined last time;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the last determined effective position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
In this embodiment, if at least one of the rotation angle and the comprehensive translation amount is detected not to be within its preset range, so that the obtained position mapping relationship is determined to be invalid, the valid position mapping relationship calculated and determined in a previous operation cycle is obtained instead. The overlapping area image corresponding to the vehicle bottom image in the second panoramic image is then searched for from the first panoramic image according to that last-determined valid position mapping relationship, and stitched to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
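This fall-back behaviour amounts to caching the last valid mapping across operation cycles, as in the following illustrative sketch (class and method names assumed, not from the patent):

```python
class MappingTracker:
    """Keep the most recent valid position mapping; fall back to it
    when a newly computed mapping fails the validity check."""

    def __init__(self, is_valid):
        self.is_valid = is_valid      # validity predicate, e.g. a range check
        self.last_valid = None

    def update(self, mapping):
        if self.is_valid(mapping):
            self.last_valid = mapping
        # If the new mapping is invalid, keep using the previous cycle's.
        return self.last_valid
```

Each cycle calls `update` with the freshly estimated mapping and stitches with whatever it returns, so a single bad estimate never corrupts the vehicle-bottom image.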
In this embodiment, a second panoramic image currently acquired and a first panoramic image acquired last time are acquired; the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image are obtained; the position mapping relationship between the first panoramic image and the second panoramic image is calculated according to those coordinates; and an overlapping area image corresponding to the vehicle bottom image in the second panoramic image is searched for from the first panoramic image according to the position mapping relationship and stitched to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image. The invention thus solves the problem that an existing vehicle-mounted panoramic all-around system cannot monitor the vehicle bottom image or confirm the vehicle bottom condition: it monitors the vehicle bottom condition through the generated vehicle bottom image, meets parking requirements, keeps the required amount of calculation comparatively small, and has strong anti-interference performance.
In a third aspect, an embodiment of the present invention further provides an image generating apparatus.
Referring to fig. 6, a functional block diagram of an embodiment of an image generating apparatus is shown.
In this embodiment, the image generation apparatus includes:
a first obtaining module 10, configured to obtain a second panoramic image currently acquired and a first panoramic image acquired last time;
a second obtaining module 20, configured to obtain coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively;
the calculation module 30 is configured to calculate a position mapping relationship between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 feature points in the first panoramic image and the second panoramic image;
and the splicing module 40 is configured to search an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relationship, and splice the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Further, in an optional embodiment, in the acquiring the coordinates of the at least 3 feature points in the first panoramic image and the second panoramic image respectively, the second acquiring module 20 is configured to:
extracting a first point set in the region of interest of the first panoramic image based on an ORB algorithm, and extracting a second point set in the region of interest of the second panoramic image based on the ORB algorithm;
calculating a first descriptor matrix of the first point set, and calculating a second descriptor matrix of the second point set;
matching the first descriptor matrix and the second descriptor matrix, and deleting the matching point combination with the matching value not meeting the threshold value based on the matching result;
selecting at least three effective matching point combinations from the remaining matching point combinations, wherein each effective matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching point combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Further, in an optional embodiment, in the acquiring the coordinates of the at least 3 feature points in the first panoramic image and the second panoramic image respectively, the second acquiring module 20 is further configured to:
selecting N points in the region of interest of the first panoramic image, and taking a preset pixel area centered on each point as a first template to obtain N first templates;
selecting N points in the region of interest of the second panoramic image, and taking a preset pixel area centered on each point as a second template to obtain N second templates;
matching the N first templates with the N second templates to obtain at least three effective matching template combinations, wherein each effective matching template combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching template combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
Further, in an optional embodiment, for the searching of an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relationship, and the splicing of the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image, the image generation apparatus further includes a detection module configured to:
detecting whether the position mapping relation is effective or not;
if the position mapping relation is valid, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
Further, in an optional embodiment, the detecting module is further configured to:
determining a rotation angle and a comprehensive translation amount according to the position mapping relation;
detecting whether the rotation angle is in a preset rotation angle range or not and detecting whether the comprehensive translation amount is in a preset comprehensive translation amount range or not;
and if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, determining that the position mapping relationship is valid, otherwise, determining that the position mapping relationship is invalid.
Further, in an optional embodiment, after the step of detecting whether the position mapping relationship is valid, the splicing module 40 is further configured to:
if the position mapping relation is invalid, obtaining the valid position mapping relation determined last time;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the last determined effective position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
The function implementation of each module in the image generation apparatus corresponds to each step in the embodiment of the image generation method, and the function and implementation process are not described in detail here.
In a fourth aspect, the embodiment of the present invention further provides a readable storage medium.
The readable storage medium of the present invention has stored thereon an image generation program, wherein the image generation program, when executed by a processor, implements the steps of the image generation method as described above.
The method implemented when the image generation program is executed may refer to each embodiment of the image generation method of the present invention, and details thereof are not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) as described above and includes instructions for causing a terminal device to execute the method according to the embodiments of the present invention.

Claims (10)

1. An image generation method, characterized by comprising:
acquiring a second panoramic image acquired currently and a first panoramic image acquired last time;
obtaining the corresponding coordinates of at least 3 feature points in the first panoramic image and the second panoramic image respectively;
calculating the position mapping relation between the first panoramic image and the second panoramic image according to the corresponding coordinates of the at least 3 characteristic points in the first panoramic image and the second panoramic image;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
2. The image generation method according to claim 1, wherein the step of obtaining the coordinates of at least 3 feature points respectively corresponding to the first panoramic image and the second panoramic image comprises:
extracting a first point set in the region of interest of the first panoramic image based on an ORB algorithm, and extracting a second point set in the region of interest of the second panoramic image based on the ORB algorithm;
calculating a first descriptor matrix of the first point set, and calculating a second descriptor matrix of the second point set;
matching the first descriptor matrix and the second descriptor matrix, and deleting the matching point combination with the matching value not meeting the threshold value based on the matching result;
selecting at least three effective matching point combinations from the remaining matching point combinations, wherein each effective matching point combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching point combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
3. The image generation method according to claim 1, wherein the step of obtaining the coordinates of at least 3 feature points in the first panoramic image and the second panoramic image, respectively, further comprises:
selecting N points in the region of interest of the first panoramic image, and taking a preset pixel area centered on each point as a first template to obtain N first templates;
selecting N points in the region of interest of the second panoramic image, and taking a preset pixel area centered on each point as a second template to obtain N second templates;
matching the N first templates with the N second templates to obtain at least three effective matching template combinations, wherein each effective matching template combination comprises a first matching point in the first panoramic image and a second matching point in the second panoramic image, and the global coordinates of the first matching point and the second matching point are consistent;
and taking the local coordinates of the first matching point of each effective matching template combination in the first panoramic image and the local coordinates of the second matching point in the second panoramic image as the corresponding coordinates of a characteristic point in the first panoramic image and the second panoramic image respectively.
4. The image generation method of claim 2 or 3, wherein the region of interest is near a vehicle bottom region, and the area of the region of interest is smaller than a preset area.
5. The image generation method according to claim 1, wherein the step of searching for an overlap area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relationship, and stitching the overlap area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image comprises:
detecting whether the position mapping relation is effective or not;
if the position mapping relation is valid, searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
6. The image generation method according to claim 5, wherein the step of detecting whether the position mapping relationship is valid includes:
determining a rotation angle and a comprehensive translation amount according to the position mapping relation;
detecting whether the rotation angle is in a preset rotation angle range or not and detecting whether the comprehensive translation amount is in a preset comprehensive translation amount range or not;
and if the rotation angle is within the preset rotation angle range and the comprehensive translation amount is within the preset comprehensive translation amount range, determining that the position mapping relationship is valid, otherwise, determining that the position mapping relationship is invalid.
7. The image generation method according to claim 6, characterized by, after the step of detecting whether the position mapping relationship is valid, comprising:
if the position mapping relation is invalid, obtaining the valid position mapping relation determined last time;
and searching an overlapping area image corresponding to the vehicle bottom image in the second panoramic image from the first panoramic image according to the last determined effective position mapping relation, and splicing the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
8. An image generation apparatus, characterized in that the image generation apparatus comprises:
a first acquisition module, configured to acquire a currently acquired second panoramic image and a previously acquired first panoramic image;
a second acquisition module, configured to acquire the coordinates of at least 3 feature points in the first panoramic image and in the second panoramic image respectively;
a calculation module, configured to calculate the position mapping relationship between the first panoramic image and the second panoramic image according to the coordinates of the at least 3 feature points in the first panoramic image and in the second panoramic image;
and a stitching module, configured to search the first panoramic image, according to the position mapping relationship, for an overlapping area image corresponding to the vehicle bottom image in the second panoramic image, and stitch the overlapping area image to the vehicle bottom area in the second panoramic image to generate the vehicle bottom image in the second panoramic image.
9. An image generation device, characterized in that the image generation device comprises a processor, a memory, and an image generation program stored in the memory and executable by the processor, wherein the image generation program, when executed by the processor, implements the steps of the image generation method according to any one of claims 1 to 7.
10. A readable storage medium having an image generation program stored thereon, wherein the image generation program, when executed by a processor, implements the steps of the image generation method according to any one of claims 1 to 7.
CN202111016477.5A 2021-08-31 2021-08-31 Image generation method, device, equipment and readable storage medium Pending CN113850881A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111016477.5A CN113850881A (en) 2021-08-31 2021-08-31 Image generation method, device, equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN113850881A 2021-12-28

Family

ID=78976771

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111016477.5A Pending CN113850881A (en) 2021-08-31 2021-08-31 Image generation method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN113850881A (en)

Similar Documents

Publication Publication Date Title
CN113554698B (en) Vehicle pose information generation method and device, electronic equipment and storage medium
US11657319B2 (en) Information processing apparatus, system, information processing method, and non-transitory computer-readable storage medium for obtaining position and/or orientation information
CN111830953B (en) Vehicle self-positioning method, device and system
EP2851870B1 (en) Method for estimating ego motion of an object
CN109657638B (en) Obstacle positioning method and device and terminal
WO2022237272A1 (en) Road image marking method and device for lane line recognition
KR20150067680A (en) System and method for gesture recognition of vehicle
CN114913506A (en) 3D target detection method and device based on multi-view fusion
CN113029128A (en) Visual navigation method and related device, mobile terminal and storage medium
CN114897684A (en) Vehicle image splicing method and device, computer equipment and storage medium
CN110197104B (en) Distance measurement method and device based on vehicle
CN114120254A (en) Road information identification method, device and storage medium
CN112241963A (en) Lane line identification method and system based on vehicle-mounted video and electronic equipment
JP2003009141A (en) Processing device for image around vehicle and recording medium
CN116012805A (en) Object perception method, apparatus, computer device, storage medium, and program product
CN116543143A (en) Training method of target detection model, target detection method and device
JP2009077022A (en) Driving support system and vehicle
CN115565155A (en) Training method of neural network model, generation method of vehicle view and vehicle
CN113850881A (en) Image generation method, device, equipment and readable storage medium
CN114863096A (en) Semantic map construction and positioning method and device for indoor parking lot
US11615631B2 (en) Apparatus and method for providing top view image of parking space
CN113011212B (en) Image recognition method and device and vehicle
CN114120260A (en) Method and system for identifying travelable area, computer device, and storage medium
CN115861316B (en) Training method and device for pedestrian detection model and pedestrian detection method
CN117201708B (en) Unmanned aerial vehicle video stitching method, device, equipment and medium with position information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20220330

Address after: 430000 chuanggu startup zone b1336, taizihu cultural Digital Creative Industry Park, No. 18, Shenlong Avenue, Wuhan Economic and Technological Development Zone, Wuhan, Hubei Province

Applicant after: Yikatong (Hubei) Technology Co.,Ltd.

Address before: 430000 building B, building 7, Qidi Xiexin science and Innovation Park, South taizihu innovation Valley, Wuhan Economic and Technological Development Zone, Wuhan, Hubei Province

Applicant before: HUBEI ECARX TECHNOLOGY Co.,Ltd.