CN115797442A - Simulation image re-injection method of target position and related equipment - Google Patents
- Publication number
- CN115797442A (application CN202211538620.1A)
- Authority
- CN
- China
- Prior art keywords
- camera
- target
- parameter matrix
- image
- target position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Abstract
The present application relates to a simulation image re-injection method for a target position and related equipment. The method includes: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle; determining at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras; forming a simulated image matching the target position based on the real images and depth maps corresponding to the at least one second camera and on the first camera data corresponding to the target camera; and re-injecting the simulated image into the data processing unit of the second vehicle. By using the first camera data of the first vehicle to form a simulated image at the target position of the second vehicle, the present application further enriches the second camera data, enhances its adaptability, and broadens its range of application.
Description
Technical Field
The present application relates to the field of computer technology, and in particular to a simulation image re-injection method for a target position and related equipment.
Background Art
Unmanned driving technology is currently developing rapidly, and what it depends on most is data collected under real-world conditions. During algorithm development, re-injecting (i.e., injecting) data collected under real-world conditions into the controller makes it possible to verify the effect of the algorithm and to improve the efficiency of algorithm development and verification.
Specifically, an algorithm (for example, a machine-learning neural network) is deployed in the controller. After algorithm development is completed, the cameras on the vehicle can inject the captured video data into the controller. The controller's algorithm processes the captured video data to obtain output results, thereby realizing various functions such as target recognition.
During algorithm development, the algorithm (for example, a neural network) needs to be trained and verified, and at that stage various video data must be injected into the controller's algorithm. The data source may be real captured video data or simulated video data. In the prior art, however, when video data needs to be injected into the controller of a vehicle of a new model, either the cameras of the new-model vehicle must actually capture real video data, or simulated video data must be produced specifically for the cameras of the new model. When video data is injected for a new-model vehicle, there is often not much of it, so good training and verification results cannot be achieved.
Summary of the Invention
In view of this, the present application proposes a simulation image re-injection method for a target position and related equipment, which can use the first camera data of a first vehicle to form a simulated image at a target position of a second vehicle, thereby further enriching the second camera data of the second vehicle, enhancing the adaptability of the second camera data, and broadening the range of application of the second camera data.
According to one aspect of the present application, a simulation image re-injection method for a target position is provided. The method includes: acquiring first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes the extrinsic parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position differs from the position of each second camera, and the second camera data includes the extrinsic parameter matrix of each second camera and the real images captured by that second camera; determining at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras; forming a simulated image matching the target position based on the real images and depth maps corresponding to the at least one second camera and on the first camera data corresponding to the target camera; and re-injecting the simulated image into the data processing unit of the second vehicle.
According to another aspect of the present application, a simulation image re-injection apparatus for a target camera is provided. The apparatus includes: a camera data acquisition module, configured to acquire first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes the extrinsic parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position differs from the position of each second camera, and the second camera data includes the extrinsic parameter matrix of each second camera and the real images captured by that second camera; a camera determination module, configured to determine at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras; an image forming module, configured to form a simulated image matching the target position based on the real images and depth maps corresponding to the at least one second camera and on the first camera data corresponding to the target camera; and a re-injection module, configured to re-inject the simulated image into the data processing unit of the second vehicle.
By acquiring the first camera data of the first camera set of the first vehicle and the second camera data of the second camera set of the second vehicle, then determining at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras, then forming a simulated image matching the target position based on the real images and depth maps corresponding to the at least one second camera and on the first camera data corresponding to the target camera, and finally re-injecting the simulated image into the data processing unit of the second vehicle, the aspects of the present application can use the first camera data of the first vehicle to form a simulated image at the target position of the second vehicle, further enriching the second camera data of the second vehicle, enhancing its adaptability, and broadening its range of application.
Brief Description of the Drawings
The technical solutions and other beneficial effects of the present application will become apparent from the following detailed description of specific embodiments of the present application taken in conjunction with the accompanying drawings.
Fig. 1 shows a flowchart of a simulation image re-injection method for a target position according to an embodiment of the present application.
Fig. 2 shows a schematic diagram of a target position and a target camera according to an embodiment of the present application.
Fig. 3 shows a block diagram of a simulation image re-injection apparatus for a target camera according to an embodiment of the present application.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description of the Embodiments
The technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. Apparently, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.
In the description of the present application, it should be understood that orientations or positional relationships indicated by terms such as "center", "longitudinal", "transverse", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise" are based on the orientations or positional relationships shown in the drawings. They are used only to facilitate and simplify the description of the present application, and do not indicate or imply that the referenced device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting the present application. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include one or more such features. In the description of the present application, "a plurality of" means two or more, unless otherwise specifically defined.
In the description of the present application, it should be noted that, unless otherwise explicitly specified and limited, the terms "install", "join", and "connect" should be understood broadly. For example, a connection may be fixed, detachable, or integral; it may be mechanical, electrical, or communicative; it may be direct or indirect through an intermediary; and it may be an internal communication between two elements or an interaction between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present application according to the specific situation.
The following disclosure provides many different implementations or examples for realizing different structures of the present application. To simplify the disclosure of the present application, the components and arrangements of specific examples are described below. Of course, they are merely examples and are not intended to limit the present application. Furthermore, the present application may repeat reference numerals and/or reference letters in different examples; such repetition is for simplicity and clarity and does not in itself indicate a relationship between the various implementations and/or arrangements discussed. In addition, the present application provides examples of various specific processes and materials, but those of ordinary skill in the art will recognize the applicability of other processes and/or the use of other materials. In some instances, methods, means, elements, and circuits well known to those skilled in the art are not described in detail so as to highlight the gist of the present application.
图1示出本申请实施例的目标位置的仿真图像回注方法的流程图。如图1所示,本申请的目标位置的仿真图像回注方法可包括:FIG. 1 shows a flowchart of a method for reinjection of a simulated image at a target location according to an embodiment of the present application. As shown in Figure 1, the simulated image reinjection method of the target position of the present application may include:
步骤S1:获取第一车辆的第一相机集的第一相机数据以及第二车辆的第二相机集的第二相机数据,所述第一相机数据包括各第一相机的外部参数矩阵,所述第一相机集包括预设的目标相机,所述第二相机集包括多个第二相机,所述目标相机的位置与所述第二车辆上预设的目标位置相同,所述目标位置与各所述第二相机的位置不同,所述第二相机数据包括第二相机的外部参数矩阵与所述第二相机拍摄到的真实图像;Step S1: Obtain the first camera data of the first camera set of the first vehicle and the second camera data of the second camera set of the second vehicle, the first camera data includes the external parameter matrix of each first camera, the The first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, the position of the target camera is the same as the preset target position on the second vehicle, and the target position is the same as each The positions of the second cameras are different, and the second camera data includes an extrinsic parameter matrix of the second camera and a real image captured by the second camera;
其中,所述多个第一相机可安装在第一车辆上,所述第一车辆可以有一辆,也可以有多辆。相应的,所述多个第二相机可安装在第二车辆上,所述第二车辆也可以有一辆,也可以有多辆。可以理解,对于第一车辆以及第二车辆的数量,本申请并不限定。Wherein, the plurality of first cameras may be installed on the first vehicle, and there may be one or more than one first vehicle. Correspondingly, the plurality of second cameras may be installed on the second vehicle, and there may be one or more than one second vehicle. It can be understood that the present application does not limit the quantity of the first vehicle and the quantity of the second vehicle.
其中,所述第一相机数据可以是一个数据集合,所述第一相机数据可包括各所述第一相机的外部参数矩阵以及各所述第一相机的内部参数矩阵等参数。与所述第一相机数据类似,所述第二相机数据可以是一个数据集合,所述第二相机数据可包括各所述第二相机的外部参数矩阵以及各所述第二相机的内部参数矩阵等参数。需要说明的是,所述第一相机数据还可包括各所述第一相机拍摄到的原始图像,所述第二相机数据还可包括各所述第二相机拍摄到的真实图像。Wherein, the first camera data may be a data set, and the first camera data may include parameters such as an extrinsic parameter matrix of each of the first cameras and an internal parameter matrix of each of the first cameras. Similar to the first camera data, the second camera data may be a data set, and the second camera data may include an extrinsic parameter matrix of each second camera and an internal parameter matrix of each second camera and other parameters. It should be noted that the first camera data may also include original images captured by each of the first cameras, and the second camera data may also include real images captured by each of the second cameras.
Exemplarily, the first vehicle may be a vehicle of an old model and the second vehicle a vehicle of a new model, and both the first camera data and the second camera data may be stored in corresponding databases.
The extrinsic parameter matrix and the intrinsic parameter matrix of the target camera may be acquired simultaneously or sequentially. The extrinsic parameter matrix may include the rotation and translation parameters for converting from the world coordinate system to the camera coordinate system, and is used to convert coordinate points in the world coordinate system to coordinate points in the camera coordinate system. The intrinsic parameter matrix may include parameters of the camera itself, such as the focal length, and may be used to convert coordinate points in the camera coordinate system to coordinate points in the pixel coordinate system. The intrinsic parameter matrix is usually fixed, while the extrinsic parameter matrix is related to parameters such as the position and orientation of the camera.
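As an illustration of how the two matrices cooperate, the following sketch projects a 3-D world point into pixel coordinates. This is not code from the patent; the rotation, translation, focal lengths, and principal point are made-up values, and the rotation is taken as the identity for simplicity.

```python
# Extrinsic parameters: rotation R (3x3) and translation t convert a point
# from the world coordinate system to the camera coordinate system.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]        # assumed rotation (identity, for illustration)
t = [0.5, 0.0, 1.2]          # assumed translation parameters (tx, ty, tz)

# Intrinsic parameters: focal lengths and principal point convert camera
# coordinates to pixel coordinates. All values are made up.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

def project(p_world):
    """Project a 3-D world point to pixel coordinates (u, v)."""
    # World -> camera: apply the extrinsic rotation and translation.
    p_cam = [sum(R[i][j] * p_world[j] for j in range(3)) + t[i]
             for i in range(3)]
    x, y, z = p_cam
    # Camera -> pixel: apply the intrinsics with perspective division.
    return fx * x / z + cx, fy * y / z + cy

u, v = project([1.0, 0.0, 5.0])
```

Changing t or R changes where the same world point lands in the image, which is exactly why the extrinsic matrix is tied to the camera's position and orientation while the intrinsic matrix stays fixed.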
Fig. 2 shows a schematic diagram of a target position and a target camera according to an embodiment of the present application.
As shown in Fig. 2, a plurality of first cameras may be installed on the first vehicle, and a plurality of second cameras may be installed on the second vehicle. One of the plurality of first cameras may serve as the target camera; this target camera exists on the first vehicle, while no corresponding camera exists at the same position on the second vehicle.
Specifically, the position of the target camera is the same as the preset target position on the second vehicle, and the target position differs from the position of every second camera. In the embodiment of the present application, no camera exists in the second vehicle at the same position as the target camera of the first vehicle. Therefore, several second cameras adjacent to the target position can be used to simulate the image that might be captured if a second camera were installed at the target position.
Because the position of a first camera installed on the first vehicle differs from that of a second camera installed on the second vehicle, or because the model of a first camera differs from that of a second camera, the angle of view of the first camera differs from that of the second camera. The camera data of the first vehicle therefore cannot be used directly for the second vehicle, and the method of the present application must be used to transform the camera data of the first vehicle before it is used for the second vehicle. In the case where the position of a second camera is the same as that of a first camera, the first camera data corresponding to that first camera can be transplanted directly to the second camera at the same position.
Step S2: determining at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras.
In the embodiment of the present application, since no camera exists in the second vehicle at the same position as the target camera, at least one second camera near the target position of the second vehicle must be used to simulate the image at the target position, i.e., the image that would be captured if a camera were installed there. It is worth noting that, since the extrinsic parameter matrix reflects the angle-of-view information of the camera, the intrinsic parameter matrix is not needed in the process of determining the at least one second camera adjacent to the target position, which improves the efficiency of selecting the at least one second camera.
Further, determining at least one second camera adjacent to the target position according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras may include:
Step S21: extracting, from the extrinsic parameter matrix of the target camera, a first translation parameter, a second translation parameter, and a third translation parameter corresponding to the target camera.
For example, N first cameras C1, C2, ..., CN are installed on the first vehicle, where N is a positive integer. For each first camera, the corresponding extrinsic parameter matrix can be acquired. Exemplarily, for the target camera C1, the first, second, and third translation parameters in its extrinsic parameter matrix may be denoted tx(C1), ty(C1), and tz(C1), respectively, representing the translation parameters that transform a target object captured by the target camera from the world coordinate system to the camera coordinate system.
Step S22: extracting, from the extrinsic parameter matrix of each second camera, a fourth translation parameter, a fifth translation parameter, and a sixth translation parameter corresponding to that second camera.
The fourth, fifth, and sixth translation parameters may be the tx, ty, and tz entries of each second camera's extrinsic parameter matrix, representing the translation parameters that transform a target object captured by that second camera from the world coordinate system to the camera coordinate system.
Step S23: based on the fourth, fifth, and sixth translation parameters corresponding to each second camera and the first, second, and third translation parameters corresponding to the target camera, calculating the Euclidean distance between each second camera and the target camera, to obtain a plurality of camera distances corresponding to the plurality of second cameras.
Exemplarily, for the target camera C1 on the first vehicle and any second camera on the second vehicle, the Euclidean distance corresponding to the second camera can be computed from the translation parameters of the two cameras. Specifically, the square of the difference between the first and fourth translation parameters, the square of the difference between the second and fifth translation parameters, and the square of the difference between the third and sixth translation parameters are summed, and the square root of the total gives the camera distance corresponding to that second camera. The camera distances corresponding to the other second cameras can be obtained in the same way.
Step S24: selecting at least one second camera whose camera distance is below a preset camera distance threshold as the at least one second camera adjacent to the target position.
Here, the camera distance threshold can be set as required. The at least one second camera may be n second cameras, where n is an integer. Optionally, n may be set to 2 or 3, i.e., the 2 or 3 second cameras closest to the target camera position are selected as references.
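Steps S23 and S24 above can be sketched as follows; the function and parameter names are illustrative, not from the patent text:

```python
import numpy as np

def camera_distance(t_target, t_second):
    # Euclidean distance between the two cameras' extrinsic translation
    # vectors (tx, ty, tz), computed exactly as described in step S23:
    # sum of squared per-axis differences, then square root
    t_target = np.asarray(t_target, dtype=float)
    t_second = np.asarray(t_second, dtype=float)
    return float(np.sqrt(np.sum((t_target - t_second) ** 2)))

def select_adjacent(t_target, second_translations, threshold):
    # step S24: keep the indices of the second cameras whose distance
    # to the target camera is below the preset camera distance threshold
    return [i for i, t in enumerate(second_translations)
            if camera_distance(t_target, t) < threshold]
```

For example, a second camera whose translation differs by (3, 4, 0) from the target camera lies at distance 5 and is kept under a threshold of 6.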
By using the extrinsic parameter matrix of the target camera and of each second camera to determine at least one second camera close to the target position, the embodiments of the present application can improve the accuracy of migrating camera data between different vehicles while reducing the amount of data to be computed.
Step S3: forming a simulated image matching the target position based on the real image and depth map corresponding to each of the at least one second camera and the first camera data corresponding to the target camera;
Further, before the simulated image matching the target position is formed from the real images and depth maps corresponding to the at least one second camera and the first camera data corresponding to the target camera, the simulated-image reinjection method for the target position includes:
Step S301: processing the real images captured by the at least one second camera based on an SFM algorithm to generate a depth map corresponding to each of the at least one second camera.
Here, the second camera data may be a data set including a plurality of real images captured by each second camera. In step S301, the at least one second camera may first be located in the second camera data by its camera number, and the plurality of real images captured by it may then be extracted directly.
The depth map can characterize the distance between a target object photographed by a camera and that camera, i.e., the target depth information. For example, a camera mounted at the front left of the second vehicle may photograph a traffic light whose straight-line distance from the camera is 10 m and a pedestrian whose straight-line distance is 15 m; both the traffic light and the pedestrian can then serve as target objects. There may be one or more target objects, and the distance between a target object and the vehicle camera may be measured from the geometric center of the target object to the optical center of the camera.
In practical applications, the coordinates of a target object photographed by the target camera can be calibrated in the world coordinate system. The origin of the world coordinate system is independent of the specific position of the target camera and can be chosen as needed. In general, coordinates in the world coordinate system cannot be projected directly onto a two-dimensional image plane; further transformation is required. For example, the extrinsic parameter matrix can transform world coordinates into the camera coordinate system, and the intrinsic parameter matrix can then transform the corresponding camera coordinates into the pixel coordinate system. Exemplarily, the origin of the camera coordinate system may be the optical center of the camera, and the origin of the pixel coordinate system may be the upper-left corner of the captured image.
The depth information can be obtained by radar detection or by binocular vision. It can be understood that there are many ways to obtain the depth information, which the present application does not limit.
The SFM (Structure from Motion) algorithm can reconstruct a three-dimensional structure from a sequence of two-dimensional images containing visual motion information.
Exemplarily, based on the SFM algorithm, adjacent pairs among the plurality of real images can be selected for pairwise computation: feature points across the real images are matched, the corresponding fundamental matrix and essential matrix are computed, and the depth map corresponding to each of the at least one second camera is reconstructed from them. Each of the at least one second camera corresponds to one depth map, and for any given second camera its depth map may include the distance between the target object and that second camera.
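The core geometric step of the SFM reconstruction described above, recovering a 3-D point (and hence its depth) from a pair of matched feature points in two views, can be sketched with linear (DLT) triangulation. This is only one standard step of an SFM pipeline; the feature matching and fundamental/essential matrix estimation are assumed to have been done already, and the matrix shapes are illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # P1, P2: 3x4 projection matrices (intrinsics @ [R|t]) of two views;
    # x1, x2: matched pixel coordinates (u, v) of the same feature point.
    # Build the homogeneous linear system A @ X = 0 from both projections.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # homogeneous -> Euclidean world point
```

The Z coordinate of the recovered point in a camera's frame is exactly the per-pixel value that a depth map stores.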
Further, forming the simulated image matching the target position based on the real image and depth map corresponding to each of the at least one second camera and the first camera data corresponding to the target camera includes:
Step S31: obtaining the intrinsic parameter matrix of each second camera in the at least one second camera;
Here, in step S31, the intrinsic parameter matrix of each second camera in the at least one second camera may be extracted from the second camera data.
Step S32: based on the extrinsic parameter matrix and intrinsic parameter matrix of each second camera in the at least one second camera and the depth map and real image corresponding to each of them, projecting the pixel points of the real images from the pixel coordinate system to the world coordinate system to obtain a plurality of first pixel points in the world coordinate system;
Here, the target object may include a plurality of feature points, each of which may correspond to a pixel point in an image captured of that object. Each feature point has a first pixel point in the world coordinate system. In practical applications, the feature points corresponding to each depth map may be translated into the world coordinate system.
Because the viewing angles of the first camera and the target camera differ, and pixel points in the pixel coordinate system cannot be transformed directly into points in the world coordinate system, each depth map is translated into the world coordinate system so that the depth maps can mediate the projection of the real images' pixel points from the pixel coordinate system to the world coordinate system. In other words, without a depth map this projection cannot be realized; with the extrinsic parameter matrix and intrinsic parameter matrix of each second camera in the at least one second camera together with the corresponding depth maps, the pixel points of the real images can be projected from the pixel coordinate system to the world coordinate system.
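The back-projection of step S32 can be sketched for a single pixel as follows; the function signature and the convention X_c = R·X_w + t are assumptions for illustration, not taken from the patent:

```python
import numpy as np

def pixel_to_world(u, v, depth, K, R, t):
    # Back-project one pixel (u, v) with known depth to a world point.
    # K: 3x3 intrinsic matrix; R (3x3), t (3,): extrinsics mapping
    # world -> camera as X_c = R @ X_w + t. The depth value from the
    # depth map is the Z coordinate in the camera frame.
    # pixel -> camera coordinates (inverse of the intrinsic projection)
    x_cam = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth
    # camera -> world coordinates (inverse of the extrinsic transform)
    return R.T @ (x_cam - t)
```

A real implementation would vectorize this over every valid pixel of the real image, producing the plurality of first pixel points at once.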
Step S33: forming a simulated image matching the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera.
Further, forming the simulated image matching the target position based on the plurality of first pixel points and the first camera data corresponding to the target camera includes:
Step S331: generating the extrinsic parameter matrix of the target position from the extrinsic parameter matrix of the target camera;
In one example, the extrinsic parameter matrix of the target camera may be used directly as the extrinsic parameter matrix of the target position.
Step S332: according to the extrinsic parameter matrix of the target position, projecting some or all of the first pixel points into the camera coordinate system corresponding to the target camera to obtain a plurality of second pixel points in the camera coordinate system;
In one example, the conversion relationship between the first pixel points and the second pixel points can be expressed by formula (1) as follows:

    [Xc]   [r11 r12 r13 tx]   [X]
    [Yc] = [r21 r22 r23 ty] . [Y]
    [Zc]   [r31 r32 r33 tz]   [Z]
                              [1]        (1)

Here, X, Y, Z denote the first pixel point of the target object in the world coordinate system; Xc, Yc, Zc denote the second pixel point of the target object in the camera coordinate system; the nine parameters r11 to r33 in the extrinsic parameter matrix denote the rotation of the target object from the world coordinate system to the camera coordinate system; and tx, ty, tz in the extrinsic parameter matrix denote the translation of the target object from the world coordinate system to the camera coordinate system.
Because the origin of the world coordinate system does not coincide with the origin of the camera coordinate system, a point in the world coordinate system must first be transformed into the camera coordinate system by the extrinsic parameter matrix, which characterizes the rotation and translation of this conversion, before it can be projected onto the image plane. Of course, since the target object may include multiple feature points, the objects actually processed by formula (1) may be those feature points; from the perspective of the pixel coordinate system, the feature points may be the individual pixel points of the captured image.
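Formula (1) can be sketched directly, assuming the extrinsic matrix is given as a 3x4 array [R|t]:

```python
import numpy as np

def world_to_camera(X_w, extrinsic):
    # Formula (1): map a world-frame point to the camera frame.
    # extrinsic: 3x4 matrix whose left 3x3 block holds the rotation
    # parameters r11..r33 and whose last column holds tx, ty, tz.
    X_h = np.append(np.asarray(X_w, dtype=float), 1.0)  # homogeneous coords
    return extrinsic @ X_h  # -> (Xc, Yc, Zc)
```

With an identity rotation, the result is simply the world point shifted by the translation column, which matches the role of tx, ty, tz described above.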
Step S333: forming a simulated image matching the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera.
Here, the transformation of formula (1) can be regarded as part of the forward transformation, while the process of step S32 can be called the inverse transformation.
It should be noted that this application mainly uses three coordinate systems, namely the world coordinate system, the camera coordinate system, and the pixel coordinate system, to calibrate the positions of the target object and the target camera. Those skilled in the art will understand that other coordinate transformations are possible, and the present application does not limit the transformations between coordinate systems.
Further, projecting some or all of the first pixel points into the camera coordinate system corresponding to the target camera according to the extrinsic parameter matrix of the target position to obtain a plurality of second pixel points in the camera coordinate system includes:
Step S3321: obtaining the viewing angle range of the target camera;
Here, the viewing angle range of the target camera may be the maximum viewing angle the target camera can observe. For example, a target camera mounted at the front left of the vehicle may have a maximum observable viewing angle of 120 degrees, while one mounted at the front right may have a maximum observable viewing angle of 180 degrees. Target cameras at different positions may have different viewing angle ranges; in other words, the maximum viewing angle the target camera can observe may be related to the coordinates of the target camera itself.
Step S3322: judging whether each of the plurality of first pixel points lies within the viewing angle range;
Step S3323: if a first pixel point lies within the viewing angle range, projecting it into the camera coordinate system corresponding to the target position; if it lies outside the viewing angle range, filtering it out.
It should be noted that the viewing angle range of the target camera may depend on factors such as its mounting position and its performance, and is in any case limited. For example, the target camera may cover a range of 120 degrees horizontally and 120 degrees vertically. Therefore, when projecting the first pixel points, it is necessary to judge whether each of them lies within the viewing angle range.
The target object can be selected as needed. The target object corresponding to the first image captured by the first camera is the same as the target object corresponding to the real image captured by the second camera; the first camera and the second camera can both photograph the same target object from different viewing angles.
If a first pixel point lies within the viewing angle range, it can be projected directly into the camera coordinate system corresponding to the target position; if it lies outside the viewing angle range, it can be filtered out. By judging whether each first pixel point lies within the viewing angle range, the embodiments of the present application can reduce the amount of coordinate data in the projection process and improve the efficiency of generating the simulated image.
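The filtering of steps S3322 and S3323 can be sketched as follows; the symmetric half-angle test and the 120-degree defaults are illustrative assumptions based on the example values in the text, not a prescribed implementation:

```python
import numpy as np

def in_fov(point_cam, h_fov_deg=120.0, v_fov_deg=120.0):
    # point_cam: (x, y, z) in the camera frame, z along the optical axis.
    # A point is visible if it lies in front of the camera and within
    # half the horizontal and vertical field-of-view angles.
    x, y, z = point_cam
    if z <= 0:  # behind the camera
        return False
    h = np.degrees(np.arctan2(abs(x), z))
    v = np.degrees(np.arctan2(abs(y), z))
    return h <= h_fov_deg / 2 and v <= v_fov_deg / 2

def filter_points(points_cam):
    # keep points inside the viewing angle range, drop the rest
    return [p for p in points_cam if in_fov(p)]
```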
Further, the first camera data also includes the intrinsic parameter matrix of each first camera, and forming the simulated image matching the target position based on the plurality of second pixel points and the first camera data corresponding to the target camera includes:
Step S3331: obtaining the intrinsic parameter matrix of the target camera;
Here, in step S3331, the intrinsic parameter matrix of the target camera may be extracted from the first camera data.
Step S3332: generating the intrinsic parameter matrix of the target position from the intrinsic parameter matrix of the target camera;
In one example, the intrinsic parameter matrix of the target camera may be used directly as the intrinsic parameter matrix of the target position.
Step S3333: according to the intrinsic parameter matrix of the target position, projecting each of the plurality of second pixel points into the pixel coordinate system corresponding to the target position to obtain a plurality of third pixel points in the pixel coordinate system;
In one example, the conversion relationship between the second pixel points and the third pixel points can be expressed by formula (2) as follows:

       [x]   [fx  0  cx]   [Xc]
    Zc.[y] = [ 0 fy  cy] . [Yc]
       [1]   [ 0  0   1]   [Zc]        (2)

Here, Xc, Yc, Zc denote the second pixel point of the target object in the camera coordinate system; x, y denote the third pixel point of the target object in the pixel coordinate system; cx, cy in the intrinsic parameter matrix denote the pixel coordinates on the image corresponding to the origin of the camera coordinate system; and fx, fy in the intrinsic parameter matrix may denote the camera focal lengths.
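A minimal sketch of formula (2), ignoring lens distortion:

```python
def camera_to_pixel(point_cam, fx, fy, cx, cy):
    # Formula (2): pinhole projection of a camera-frame point to pixel
    # coordinates. fx, fy are the focal lengths; cx, cy the principal point.
    Xc, Yc, Zc = point_cam
    x = fx * Xc / Zc + cx
    y = fy * Yc / Zc + cy
    return x, y
```

For instance, with fx = fy = 100 and a principal point at (50, 50), the camera-frame point (1, 0, 2) lands at pixel (100, 50).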
In addition, distortion can be considered when converting the second pixel points into the third pixel points through the intrinsic parameter matrix, for example by adding radial and tangential distortion coefficients to the intrinsic parameter model. Accounting for distortion can mitigate the offset and deformation between theoretically computed pixel points and the actual ones. In practical applications, whether to account for distortion can be decided as needed, which the present application does not limit.
It should be noted that formula (2) is based on the camera projection principle, and its transformation can also be regarded as part of the forward transformation. In this application, the first pixel points may be the coordinates of the target object in the real world, which are three-dimensional; the second pixel points may be intermediate coordinates, also three-dimensional; and the third pixel points may be the coordinates of the target object in the captured image, which are two-dimensional.
Step S3334: forming a simulated image matching the target position based on the plurality of third pixel points.
The simulated image can be used for the second vehicle; for example, it may be input into the data processing unit of the second vehicle so that the unit can use it for training, verification, or testing.
It should be noted that, when processing the depth maps, the orientation of the extrinsic parameters of the virtual viewing angle at the target position is assumed to be consistent: since the rotation parameters r11 to r33 are the same for the virtual camera at the target position and for the target camera, only the translational difference between viewing angles needs to be considered, not the rotational difference. Intuitively, the at least one second camera is close to the target position, so the orientation of each second camera is similar to that of the target camera, differing only in left-right and up-down position; this positional offset is what causes the viewing-angle difference, and a translation during the processing of the depth maps therefore suffices.
Step S4: reinjecting the simulated image into the data processing unit of the second vehicle.
Further, reinjecting the simulated image into the data processing unit of the second vehicle includes:
Step S41: filling the missing pixel points of the simulated image with a bilinear interpolation algorithm and/or an image inpainting algorithm to obtain a fitted image corresponding to the target position;
Here, each pixel point of the simulated image may correspond to a fixed RGB value (for example, a gray level). Missing or damaged pixel points can be repaired with a bilinear interpolation algorithm and/or an image inpainting algorithm. Exemplarily, the simulated image may also be completed according to semantic information, and the image inpainting algorithm may be based on Generative Adversarial Networks (GAN).
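The bilinear interpolation underlying step S41 can be sketched as the basic operation of sampling an image at a non-integer position from its four integer neighbors; filling irregular holes in practice would typically delegate to a dedicated inpainting routine, and the function name here is illustrative:

```python
import numpy as np

def bilinear_sample(img, x, y):
    # Interpolate the image value at non-integer position (x, y)
    # from the four surrounding pixels, weighted by proximity.
    # img is indexed as img[row, col] = img[y, x].
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot
```

At the center of a 2x2 patch the result is simply the average of the four pixel values.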
It should be noted that the bilinear interpolation algorithm and the image inpainting algorithm may be applied together or alternatively. Those skilled in the art will understand that both algorithms have many implementation forms; the present application does not limit their specific implementation.
Step S42: reinjecting the fitted image into the data processing unit of the second vehicle.
There may be one or more fitted images. When there are multiple fitted images, they may be further fused so that the fitted image at the target position is closer to the actual situation.
Further, reinjecting the fitted image into the data processing unit of the second vehicle may include:
Step S421: inputting the formed fitted image matching the target camera into the data processing unit of the second vehicle for neural network training to obtain a training value corresponding to the target position;
Exemplarily, the second vehicle may include an industrial computer and an injection device. The fitted image may be decoded as video data by the industrial computer (also called a real-time machine) and then injected into the data processing unit through the injection device (for example, a video injection board). The algorithm of the data processing unit may be based on a neural network model that takes the fitted image as input for neural network training, thereby obtaining a training value corresponding to the target position.
Step S422: comparing the training value with the real value collected by the camera installed at the target position to obtain a comparison result corresponding to the training value;
In one example, the training value may be compared with the real value collected by the camera installed at the target position, and the comparison result may be equal or unequal. If the comparison result is equal, the fitted image fits the image actually collected by the camera installed at the target position well; if the comparison result is unequal, the fitted image does not fit that image well and the neural network model needs to be readjusted.
Step S423: adjusting the neural network model in the data processing unit according to the comparison result.
The neural network model may include network models such as DNN, CNN, LSTM, and ResNet; the present application does not limit the type of neural network model.
In one embodiment, since the simulated image (or fitted image) and the real images usually share considerable overlap, the simulated (or fitted) image can be stitched together with the real images collected by the second cameras based on the overlapping parts to form an image to be verified; likewise, since the real images may also overlap considerably, the real images collected by the second cameras can be stitched together based on their overlapping parts to form a reference image. The similarity between the image to be verified and the reference image is then compared to obtain similarity evaluation information characterizing the similarity. Exemplarily, if the similarity evaluation information is positively correlated with similarity, the simulated image is reinjected when the evaluation information exceeds a similarity threshold and otherwise is not; if the evaluation information is negatively correlated with similarity, the simulated image is reinjected when the evaluation information is below the threshold and otherwise is not. In addition, the simulated (or fitted) image can be reinjected synchronously with other real images.
By judging whether to reinject the simulated or fitted image based on similarity, the embodiments of the present application can avoid reinjecting images with poor simulation or fitting results (for example, poor accuracy) and solve the mismatch between synchronously reinjected images observed by the data processing unit that reinjection might otherwise cause, thereby improving the training, verification, and testing of the algorithm.
It should be noted that the above scheme is mainly intended for cases where the missing pixel points of the simulated image need not be filled, though its application to cases with missing pixel points is not excluded.
In one example, cases with and without missing pixel points to fill can both be accommodated. Since differences between images in the presence of missing pixels may stem from defects of the filling algorithm or other related defects, different similarity thresholds may be used for the two cases. For example, if missing pixel points need to be filled (i.e., the image to be injected is a fitted image), the similarity threshold may be determined as a first similarity threshold; if no missing pixel points need to be filled (i.e., the image to be injected is a simulated image), the similarity threshold may be determined as a second similarity threshold. Further, if the similarity evaluation information is positively correlated with similarity, the first similarity threshold is smaller than the second similarity threshold; if negatively correlated, the first similarity threshold is greater than the second similarity threshold.
In summary, by using the coordinate mapping relationships and the depth maps, and by adaptively adjusting the first camera data and the second camera data, the present application fits a simulated image from the viewing angle of the target position, so that the first camera data can be adapted to the second vehicle. This further enriches the second camera data of the second vehicle, enhances the adaptability of the second camera data, and broadens its range of application. In addition, the adapted simulated images can be used for neural network training, improving the training accuracy of the neural network, while rapid adaptation to new vehicle models based on the first camera data and the second camera data also improves the efficiency of developing and testing new models.
图3示出本申请实施例的目标相机的仿真图像回注装置的框图。Fig. 3 shows a block diagram of a device for reinjecting a simulated image of a target camera according to an embodiment of the present application.
As shown in Fig. 3, the simulated image reinjection apparatus 30 for the target camera according to an embodiment of the present application may include:
a camera data acquisition module 31, configured to acquire first camera data of a first camera set of a first vehicle and second camera data of a second camera set of a second vehicle, where the first camera data includes the extrinsic parameter matrix of each first camera, the first camera set includes a preset target camera, the second camera set includes a plurality of second cameras, the position of the target camera is the same as a preset target position on the second vehicle, the target position differs from the position of each second camera, and the second camera data includes the extrinsic parameter matrices of the second cameras and the real images captured by the second cameras;
a camera determination module 32, configured to determine, according to the extrinsic parameter matrix of the target camera and the extrinsic parameter matrices of the second cameras, at least one second camera adjacent to the target position;
an image forming module 33, configured to form a simulated image matching the target position based on the real image and depth map corresponding to each of the at least one second camera and the first camera data corresponding to the target camera;
a reinjection module 34, configured to reinject the simulated image into a data processing unit of the second vehicle.
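A minimal sketch of what the camera determination module 32 might compute: ranking the second cameras by the translation distance between their extrinsic parameter matrices and that of the target camera, and keeping the nearest ones as "adjacent". The 4×4 matrix layout (translation in the last column) and the function name are illustrative assumptions.

```python
import numpy as np

def nearest_second_cameras(target_extrinsic, second_extrinsics, k=1):
    """Return the indices of the k second cameras whose translation is
    closest to the target position.

    target_extrinsic   : 4x4 extrinsic matrix of the target camera.
    second_extrinsics  : list of 4x4 extrinsic matrices of second cameras.
    The translation-in-last-column layout is assumed for illustration.
    """
    t_target = target_extrinsic[:3, 3]
    dists = [np.linalg.norm(E[:3, 3] - t_target) for E in second_extrinsics]
    # Sort camera indices by distance to the target position.
    return sorted(range(len(second_extrinsics)), key=lambda i: dists[i])[:k]
```

The indices returned would then select which real images and depth maps the image forming module 33 warps into the target-position viewpoint.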
In addition, the present application provides a computer-readable medium on which a computer program is stored; when the computer program is executed by a processor, the simulated image reinjection method for the target position is implemented.
Further, the present application also provides an electronic device, including: one or more processors; and a storage apparatus for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the simulated image reinjection method for the target position.
Fig. 4 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in Fig. 4, the electronic device can be used to implement the simulated image reinjection method for the target position. Specifically, the electronic device may include a computer system. It should be noted that the electronic device shown in Fig. 4 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 4, the computer system includes a central processing unit (CPU) 1801, which can perform various appropriate actions and processes, such as the methods described in the above embodiments, according to a program stored in a read-only memory (ROM) 1802 or a program loaded from a storage portion 1808 into a random access memory (RAM) 1803. The RAM 1803 also stores various programs and data required for system operation. The CPU 1801, the ROM 1802, and the RAM 1803 are connected to one another via a bus 1804. An input/output (I/O) interface 1805 is also connected to the bus 1804.
The following components are connected to the I/O interface 1805: an input portion 1806 including a keyboard, a mouse, and the like; an output portion 1807 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 1808 including a hard disk and the like; and a communication portion 1809 including a network interface card such as a LAN (Local Area Network) card or a modem. The communication portion 1809 performs communication processing via a network such as the Internet. A drive 1810 is also connected to the I/O interface 1805 as needed. A removable medium 1811, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 1810 as needed, so that a computer program read therefrom is installed into the storage portion 1808 as needed.
In particular, according to the embodiments of the present application, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the embodiments of the present application include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 1809, and/or installed from the removable medium 1811. When the computer program is executed by the central processing unit (CPU) 1801, the various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the embodiments of the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which a computer-readable program is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. A computer program embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, or any suitable combination of the above.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present application. Each block in a flowchart or block diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions noted in a block may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block in the block diagrams or flowcharts, and combinations of blocks in the block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented by software or by hardware, and the described units may also be provided in a processor. The names of these units do not, under certain circumstances, constitute a limitation on the units themselves.
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to implement the methods described in the above embodiments.
It should be noted that although several modules or units of the device for action execution are mentioned in the detailed description above, this division is not mandatory. In fact, according to the embodiments of the present application, the features and functions of two or more of the modules or units described above may be embodied in a single module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Through the description of the above embodiments, those skilled in the art will readily understand that the example embodiments described here may be implemented by software, or by software combined with necessary hardware. Therefore, the technical solutions according to the embodiments of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (such as a CD-ROM, a USB flash drive, or a portable hard disk) or on a network, and includes several instructions that cause a computing device (such as a personal computer, a server, a touch terminal, or a network device) to execute the methods according to the embodiments of the present application.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
The simulated image reinjection method for a target position provided by the embodiments of the present application, together with its related equipment, has been described in detail above. Specific examples have been used herein to explain the principles and implementations of the present application; the descriptions of the above embodiments are intended only to help in understanding the technical solutions and core ideas of the present application. Those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.
Claims (10)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211538620.1A CN115797442B (en) | 2022-12-01 | 2022-12-01 | Simulation image back-injection method for target position and related equipment |
CN202410621913.9A CN118587667A (en) | 2022-12-01 | 2022-12-01 | Simulation image generation method, simulation image back-injection method and related devices and equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211538620.1A CN115797442B (en) | 2022-12-01 | 2022-12-01 | Simulation image back-injection method for target position and related equipment |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410621913.9A Division CN118587667A (en) | 2022-12-01 | 2022-12-01 | Simulation image generation method, simulation image back-injection method and related devices and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797442A true CN115797442A (en) | 2023-03-14 |
CN115797442B CN115797442B (en) | 2024-06-07 |
Family
ID=85444986
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202410621913.9A Pending CN118587667A (en) | 2022-12-01 | 2022-12-01 | Simulation image generation method, simulation image back-injection method and related devices and equipment |
CN202211538620.1A Active CN115797442B (en) | 2022-12-01 | 2022-12-01 | Simulation image back-injection method for target position and related equipment |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN118587667A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR3034555A1 (en) * | 2015-04-03 | 2016-10-07 | Continental Automotive France | METHOD FOR DETERMINING THE DIRECTION OF THE MOVEMENT OF A MOTOR VEHICLE |
CN113868873A (en) * | 2021-09-30 | 2021-12-31 | 重庆长安汽车股份有限公司 | Automatic driving simulation scene expansion method and system based on data reinjection |
CN114299230A (en) * | 2021-12-21 | 2022-04-08 | 中汽创智科技有限公司 | A data generation method, device, electronic device and storage medium |
CN114723820A (en) * | 2022-03-09 | 2022-07-08 | 福思(杭州)智能科技有限公司 | Road data multiplexing method, assisted driving system, device and computer equipment |
CN114821497A (en) * | 2022-02-24 | 2022-07-29 | 广州文远知行科技有限公司 | Method, device, device and storage medium for determining the position of a target |
Also Published As
Publication number | Publication date |
---|---|
CN118587667A (en) | 2024-09-03 |
CN115797442B (en) | 2024-06-07 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |