CN115727873A - Sensor information processing method and device, electronic equipment and storage medium


Publication number
CN115727873A
CN115727873A (application CN202211423391.9A)
Authority
CN
China
Prior art keywords
sensor
scene
imaging simulation
configuration information
checking
Prior art date
Legal status
Pending
Application number
CN202211423391.9A
Other languages
Chinese (zh)
Inventor
孙伟 (Sun Wei)
Current Assignee
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd filed Critical Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202211423391.9A
Publication of CN115727873A
Legal status: Pending

Landscapes

  • Image Processing (AREA)

Abstract

The disclosure provides a sensor information processing method and apparatus, an electronic device, and a storage medium, relating to the technical field of artificial intelligence and, in particular, to computer vision and autonomous driving. The specific implementation scheme is as follows: acquiring configuration information of a sensor, where the configuration information is used to determine calibration parameters of the sensor; and displaying an imaging simulation diagram of the sensor in a target scene, where the imaging simulation diagram includes an image of a check reference object. The imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data includes three-dimensional data of the check reference object. According to the embodiments of the disclosure, based on the imaging simulation diagram and the check reference object therein, a user can determine whether the perception effect of the sensor under the current configuration meets the checking requirement; the checking is comprehensive, intuitive, and efficient.

Description

Sensor information processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technology, and more particularly to the fields of computer vision and autonomous driving.
Background
Autonomous driving depends heavily on various types of sensors (e.g., cameras, radar). The deployment position of a sensor critically constrains its perception capability. For example, if a camera is mounted close to the vehicle surface, the surface may block the camera's imaging and limit its shooting range. Therefore, before the deployment position of a sensor is confirmed, its perception effect needs to be checked.
Disclosure of Invention
The disclosure provides a sensor information processing method and device, an electronic device and a storage medium.
According to an aspect of the present disclosure, there is provided a method of processing sensor information, including:
acquiring configuration information of a sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
displaying an imaging simulation diagram of the sensor in a target scene, wherein the imaging simulation diagram comprises an image of a check reference object; the imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data comprises three-dimensional data of the check reference object.
According to another aspect of the present disclosure, there is provided a sensor information processing apparatus including:
the configuration information acquisition module is used for acquiring configuration information of the sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
the image display module is used for displaying an imaging simulation diagram of the sensor in a target scene, wherein the imaging simulation diagram comprises an image of a check reference object; the imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data comprises three-dimensional data of the check reference object.
According to another aspect of the present disclosure, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to cause the at least one processor to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing a computer to perform a method according to any one of the embodiments of the present disclosure.
According to another aspect of the present disclosure, a computer program product is provided, comprising a computer program which, when executed by a processor, implements a method according to any of the embodiments of the present disclosure.
According to this technical scheme, an imaging simulation diagram is generated based on the acquired configuration information of the sensor, so that the perception area of the sensor in the target scene is presented through imaging simulation. Because the three-dimensional data of the check reference object is set in the virtual scene data of the target scene, the imaging simulation diagram includes an image of the check reference object. Based on the imaging simulation diagram and the check reference object therein, a user can determine whether the perception effect of the sensor under the current configuration meets the checking requirement; the checking is comprehensive, intuitive, and efficient.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic flow diagram of a method of processing sensor information according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of an imaging simulation diagram of an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of another imaging simulation diagram of an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a three-dimensional scene model corresponding to virtual scene data according to an embodiment of the present disclosure;
FIG. 5 is a schematic block diagram of a processing device of sensor information of an embodiment of the present disclosure;
fig. 6 is a schematic block diagram of a processing device of sensor information of another embodiment of the present disclosure;
fig. 7 is a schematic block diagram of a sensor information processing apparatus of a further embodiment of the present disclosure;
fig. 8 is a schematic block diagram of a sensor information processing apparatus of a further embodiment of the present disclosure;
fig. 9 is a schematic block diagram of a sensor information processing apparatus of a further embodiment of the present disclosure;
fig. 10 is a schematic block diagram of a sensor information processing apparatus of a further embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device used to implement a method of processing sensor information according to an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 shows a method of processing sensor information according to an embodiment of the present disclosure. The method can be applied to a sensor information processing apparatus, which can be deployed in an electronic device such as a terminal device or a server. As shown in fig. 1, the method may include:
step S110, acquiring configuration information of a sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
and step S120, displaying an imaging simulation diagram of the sensor in the target scene, wherein the imaging simulation diagram comprises an image of the checking reference object. The imaging simulation diagram is generated based on calibration parameters and virtual scene data, and the virtual scene data comprises three-dimensional data of a check reference object.
Illustratively, the sensor may include a camera, radar, or the like, which may present the sensing result in an imaging manner.
The imaging process of the sensor can be simulated by a mathematical model comprising an external reference (extrinsic) model and an internal reference (intrinsic) model. The external reference model represents the conversion between a three-dimensional scene in the world coordinate system (i.e., in the real world) and the same scene in the camera coordinate system. The internal reference model represents the conversion between the three-dimensional scene in the camera coordinate system and the image perceived by the sensor. Since the parameters in the mathematical model are usually obtained by calibration, they may be referred to as the calibration parameters of the sensor. That is, in the disclosed embodiments, the calibration parameters may include the parameters of the internal reference model (i.e., the internal parameters) and the parameters of the external reference model (i.e., the external parameters).
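As a minimal illustration of the two models above (a standard pinhole-camera sketch; the patent does not prescribe a specific model, and all numbers here are assumptions), the external parameters (a rotation R and translation t) map a world point into the camera coordinate system, and the internal parameters (focal lengths fx, fy and principal point cx, cy) map it onto the image plane:

```python
# Minimal pinhole-camera sketch: the extrinsics map world -> camera,
# the intrinsics map camera -> pixel. Illustrative only; the patent
# does not prescribe a specific camera model.

def world_to_camera(point_w, R, t):
    """Apply extrinsics: p_c = R @ p_w + t (R as a 3x3 nested list)."""
    return [sum(R[i][j] * point_w[j] for j in range(3)) + t[i] for i in range(3)]

def camera_to_pixel(point_c, fx, fy, cx, cy):
    """Apply intrinsics: perspective divide, then scale and offset."""
    x, y, z = point_c
    return (fx * x / z + cx, fy * y / z + cy)

# Identity rotation; the camera sits 2 m behind the point along z.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 2.0]
u, v = camera_to_pixel(world_to_camera([0.5, 0.0, 0.0], R, t),
                       fx=800, fy=800, cx=640, cy=360)
print(u, v)  # point lands right of the principal point: 840.0 360.0
```

Calibrating and then fixing these parameters is what lets the same projection be replayed against virtual scene data instead of a real scene.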
For example, the configuration information of the sensor may include one or more information for determining internal and external parameters. For example, the configuration information may include the internal parameter and the external parameter themselves, and may also include the model, focal length, installation position, etc. of the sensor, which may be used to obtain the information of the internal parameter and the external parameter.
Illustratively, the configuration information of the sensor may be obtained by interacting with a user; for example, it is determined based on a user's input or selection. An input box may be provided on the user interface for the user to enter the internal parameters of the sensor and the three-dimensional coordinates of its installation position. Alternatively, a selection box may be provided for the user to choose the installation position among a plurality of positions whose three-dimensional coordinates are known. In a vehicle-sensor arrangement-checking scenario, each installation position may be represented by a vehicle component in the target scene; for example, an installation position may be the front of the vehicle's B-pillar side (the B-pillar being the pillar between the front-door and rear-door side windows), the side rear of the vehicle body, or the like.
For example, the target scene may be a physical scene for which a three-dimensional scene model has been built in advance, such as an underground garage or a road relevant to autonomous driving. In the embodiments of the present disclosure, the virtual scene data is the data corresponding to the three-dimensional scene model: by collecting data in the real physical scene in advance and modeling based on it, the three-dimensional scene model of the target scene, and thus the virtual scene data carrying that model, is obtained. For example, the virtual scene data may include the three-dimensional coordinates of each position point of the target scene in the world coordinate system and the corresponding imaging values.
After obtaining the configuration information of the sensor and determining the corresponding calibration parameters, the electronic device may perform imaging simulation based on the calibration parameters and the virtual scene data; that is, it determines the correspondence between each position point in the virtual scene data and each pixel in the image based on the calibration parameters, and projects the imaging value of each position point onto the corresponding pixel, thereby obtaining the imaging simulation diagram.
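The projection step just described can be sketched as follows (a hedged illustration; the data layout and names are assumptions, not the patent's implementation). Each scene point carrying an imaging value is projected to a pixel, and a depth buffer keeps only the nearest point per pixel so that occluding geometry hides what lies behind it:

```python
# Point-splatting sketch of imaging simulation: project every virtual
# scene point and keep the nearest one per pixel (a z-buffer), so that
# occluding geometry correctly hides points behind it. Illustrative only.

def simulate_image(scene_points, fx, fy, cx, cy, width, height):
    """scene_points: list of ((x, y, z) in camera frame, imaging value)."""
    image = [[None] * width for _ in range(height)]
    zbuf = [[float("inf")] * width for _ in range(height)]
    for (x, y, z), value in scene_points:
        if z <= 0:                       # behind the camera plane
            continue
        u = int(round(fx * x / z + cx))  # pinhole projection
        v = int(round(fy * y / z + cy))
        if 0 <= u < width and 0 <= v < height and z < zbuf[v][u]:
            zbuf[v][u] = z               # nearer point overwrites farther one
            image[v][u] = value
    return image

# Two points project onto the same pixel; the nearer one (z=1) must win.
points = [((0.0, 0.0, 2.0), "far"), ((0.0, 0.0, 1.0), "near")]
img = simulate_image(points, fx=100, fy=100, cx=2, cy=2, width=5, height=5)
print(img[2][2])  # -> near
```

The z-buffer is what makes occlusion by vehicle components (mirrors, pillars, body panels) fall out of the simulation naturally, as the following embodiments describe.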
According to the embodiments of the present disclosure, the virtual scene data may include the three-dimensional data of the check reference object; accordingly, the imaging simulation diagram obtained based on the calibration parameters and the virtual scene data includes an image of the check reference object. In this way, the check reference object is displayed to the user as part of the imaging simulation diagram.
Illustratively, the check reference object may be used to represent a checking parameter, i.e., a parameter for checking the perception effect; for example, the checking parameter may be the illumination height, the size of a blind area, or the size of an overlapping area with other sensors. Specifically, where the checking parameter is the illumination height, the check reference object may be a scale showing the illumination height. Where the checking parameter is the overlapping area between the current sensor and another sensor, the check reference object may be a geometric body that covers the other sensor's perception area and has a specific texture. Depending on the practical application, the check reference object may also be a checkerboard or another reference object.
With the method provided by this embodiment, an imaging simulation diagram is generated based on the acquired configuration information of the sensor, so that the perception area of the sensor in the target scene is presented through imaging simulation. Because the three-dimensional data of the check reference object is set in the virtual scene data of the target scene, the imaging simulation diagram includes an image of the check reference object. Based on the imaging simulation diagram and the check reference object therein, a user can determine whether the perception effect of the sensor under the current configuration meets the checking requirement; the checking is comprehensive, intuitive, and efficient.
In some embodiments, devices for installing sensors, such as vehicles and road infrastructure devices, may be set in the three-dimensional scene model corresponding to the virtual scene data. When the configuration information of the sensor indicates that the sensor is installed on the devices, the imaging simulation diagram can truly reflect the perception effect when the perception area of the sensor is blocked by the components of the devices. For example, if the sensor is installed on a vehicle, the blocking of the sensor mount, the hood, the body member, and the like on the vehicle will be reflected in the imaging phantom.
FIG. 2 shows a schematic diagram of an imaging simulation diagram in accordance with an embodiment of the present disclosure. It is the imaging simulation diagram of a camera at the side rear of the vehicle body; as can be seen, the rearview mirror 201 and the side front face 202 of the vehicle block a partial region of the target scene.
Fig. 3 shows a schematic diagram of another imaging simulation diagram in accordance with an embodiment of the present disclosure. It is the imaging simulation diagram of a camera at the front of the vehicle's B-pillar side; as can be seen, the camera sits behind an opening in the B-pillar, and the opening edge 301 blocks a partial region of the target scene.
Therefore, checking the sensor through imaging simulation allows all occluding elements to be fully considered, avoiding design errors and repeated labor.
Optionally, in some embodiments of the present disclosure, the processing method of the sensor information may further include:
acquiring configuration information of a checking parameter; the configuration information of the checking parameters is used for determining the three-dimensional data of the checking reference object corresponding to the checking parameters, so as to obtain the virtual scene data based on the three-dimensional data of the checking reference object and the three-dimensional data of the target scene.
Illustratively, the configuration information of the checking parameter can be obtained by interacting with the user; for example, it is determined based on the user's input. Its form depends on the type of checking parameter. For example, if the checking parameter is the illumination height at a certain distance ahead of the vehicle, the corresponding configuration information may be that distance value. If the checking parameter is the size of the overlapping area between the sensor to be checked and another sensor, the corresponding configuration information may be information used to determine the other sensor's perception area, such as its installation position and internal parameters.
Illustratively, the position of the check reference object can be determined from the configuration information of the checking parameter; the position is represented by three-dimensional data comprising the three-dimensional coordinates of each point of the check reference object. The virtual scene data is then obtained based on the three-dimensional data of the check reference object and the three-dimensional data of the target scene, the latter comprising the three-dimensional coordinates of each object in the target scene's three-dimensional scene model. For example, the check reference object may be placed in the three-dimensional scene model of the target scene according to its three-dimensional data. Thus, if the configuration information indicates that the illumination height is to be checked 3 meters ahead of the vehicle, a scale can be positioned 3 meters ahead of the vehicle based on that information; its three-dimensional data is determined, and the scale is added to the three-dimensional scene model accordingly, yielding the corresponding virtual scene data.
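The configuration-to-scene flow above can be sketched as follows (an illustrative example; the coordinate convention and the scale's shape are assumptions). The checking parameter's configuration yields the check reference object's three-dimensional data, which is merged with the target scene's data:

```python
# Sketch of turning checking-parameter configuration ("scale 3 m ahead
# of the vehicle") into three-dimensional data merged into the scene.
# Coordinate conventions and the scale's shape are illustrative assumptions:
# x is lateral, y is forward, z is up, all in meters.

def build_scale(distance_ahead, height=2.0, tick_spacing=0.1):
    """Vertical scale at (0, distance_ahead): one 3-D point per tick mark."""
    n_ticks = round(height / tick_spacing) + 1
    return [(0.0, distance_ahead, i * tick_spacing) for i in range(n_ticks)]

def add_check_reference(scene_points, config):
    """Merge the check reference's 3-D data into the target scene's data."""
    return scene_points + build_scale(config["distance_ahead_m"])

scene = [(5.0, 5.0, 0.0)]                      # pre-existing scene geometry
virtual_scene = add_check_reference(scene, {"distance_ahead_m": 3.0})
print(len(virtual_scene))  # original point + 21 tick points -> 22
```

Projecting this merged data with the sensor's calibration parameters then puts the scale's image into the imaging simulation diagram automatically.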
Fig. 4 is a schematic diagram of a three-dimensional scene model corresponding to virtual scene data according to an embodiment of the disclosure. As shown in fig. 4, a vehicle 401 is provided in the three-dimensional scene model. If, according to the configuration information of the checking parameters, scales are disposed 3 meters from the front and rear of the vehicle 401 and 3 meters from both of its sides, the three-dimensional scene model includes the scales 402, 403, 404, and 405 shown in fig. 4. With these scales, the illumination height of the sensor at 3 meters from the front, rear, and both sides of the vehicle can be seen intuitively.
In the above embodiment, the checking parameters are configurable, and a user can input configuration information according to actual requirements to customize and configure the checking reference object for representing the checking parameters, so that information conforming to the actual requirements is displayed in the imaging simulation diagram, repeated labor of the user is avoided, and checking efficiency is improved.
Optionally, in some embodiments of the present disclosure, the virtual scene data is also configurable. For example, the method for processing sensor information may further include obtaining configuration information of the virtual scene data, which may be used to select the virtual scene data of the target scene from pre-stored virtual scene data of a plurality of scenes; for example, the configuration information may be an identifier of the target scene, such as garage 1, garage 2, or road segment 1.
Optionally, in some embodiments of the present disclosure, profiles may be employed to control various types of parameters. Specifically, on the basis of the foregoing embodiment, the method for processing sensor information may further include: determining a sensor configuration file based on the configuration information of the sensor, wherein the sensor configuration file is used for representing calibration parameters of the sensor; and generating an imaging simulation diagram based on the scene configuration file and the sensor configuration file, wherein the scene configuration file is used for representing virtual scene data.
That is, the calibration parameters of the sensor and the virtual scene data are each loaded from a corresponding configuration file. When the sensor configuration changes or the scene is switched, the simulation setup can be adjusted quickly by modifying or switching the configuration files, so that the imaging simulation diagram for the new configuration is generated quickly and checking efficiency is improved.
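A hedged sketch of such configuration files follows (all field names and values are illustrative assumptions, not the patent's format): one file characterizes the sensor's calibration parameters, another identifies the scene, and switching either means switching files rather than code:

```python
import json

# Hedged sketch of configuration-file-driven simulation setup: one file
# characterizes the sensor's calibration parameters, another identifies
# the virtual scene. All field names here are illustrative assumptions.

sensor_cfg_text = json.dumps({
    "model": "front-cam-120",
    "intrinsics": {"fx": 800, "fy": 800, "cx": 640, "cy": 360},
    "extrinsics": {"mount": "B-pillar-front", "xyz": [0.9, 0.8, 1.2]},
})
scene_cfg_text = json.dumps({"scene_id": "garage_1"})

def load_simulation_config(sensor_json, scene_json):
    """Switching configurations = switching files; no code change needed."""
    return json.loads(sensor_json), json.loads(scene_json)

sensor_cfg, scene_cfg = load_simulation_config(sensor_cfg_text, scene_cfg_text)
print(sensor_cfg["intrinsics"]["fx"], scene_cfg["scene_id"])  # 800 garage_1
```

In practice the two texts would live on disk; keeping them separate is what lets a sensor tweak and a scene switch be made independently.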
Optionally, in some embodiments of the present disclosure, the sensory effect of the sensor may be demonstrated in multiple display modes. Specifically, in the method for processing sensor information, displaying an imaging simulation diagram of a sensor may include:
in response to a selection operation for an ith display mode in the N display modes, displaying an imaging simulation diagram of the sensor in the target scene corresponding to the ith display mode; the imaging simulation diagram corresponding to the ith display mode is obtained based on the calibration parameters and the virtual scene data corresponding to the ith display mode, N is an integer larger than or equal to 2, and i is a positive integer smaller than or equal to N.
The ith display mode may be any one of the N display modes. That is, for a user's selection operation of a certain display mode, the electronic device may perform imaging simulation using virtual scene data corresponding to the display mode, thereby obtaining an imaging simulation diagram corresponding to the display mode, and displaying the imaging simulation diagram.
In the embodiments of the present disclosure, virtual scene data corresponding to different display modes is set for the same target scene; for example, virtual scene data corresponding to multiple display modes is set for the same garage or the same road segment. Optionally, the virtual scene data of different display modes presents the target scene with different effects.
This embodiment supports displaying the imaging simulation diagram in multiple display modes, so that a user can select a display mode according to actual requirements, which helps the user check the perception effect of the sensor quickly and improves checking efficiency.
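The mode-selection behavior can be sketched as follows (mode names and data are illustrative assumptions): each display mode maps to its own virtual scene data for the same target scene, and the user's selection determines which data feeds the imaging simulation:

```python
# Sketch of display-mode selection: each mode maps to its own virtual
# scene data for the same target scene, and the user's selection decides
# which data feeds the imaging simulation. Mode names are illustrative.

SCENE_DATA_BY_MODE = {
    "panorama": ["scene points built from a panoramic-camera map"],
    "stitched": ["scene points built from multi-camera stitched maps"],
}

def on_mode_selected(mode, calibration):
    """Re-run imaging simulation with the selected mode's scene data."""
    scene_data = SCENE_DATA_BY_MODE[mode]  # the i-th mode -> its scene data
    return {"calibration": calibration, "scene": scene_data}

result = on_mode_selected("panorama", {"fx": 800})
print(result["scene"][0])
```

The calibration parameters stay fixed across modes; only the virtual scene data varies, which matches the per-mode generation described above.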
In an example, the processing method of the sensor information may further include: obtaining a first scene map of a target scene based on a panoramic image acquired by a panoramic camera in the target scene; obtaining virtual scene data corresponding to the jth display mode in the N display modes based on the first scene map; j is a positive integer less than or equal to N.
Illustratively, a panoramic image captured by a panoramic camera may be used as the first scene map of the target scene. Virtual scene data obtained from the first scene map contains more realistic and complete scene information, so the imaging simulation diagram reflects the sensor's perception area more realistically and completely, improving the accuracy of sensor checking.
In an example, the processing method of the sensor information may further include: splicing at least two images acquired from different directions by at least two cameras in a target scene to obtain a second scene map of the target scene; obtaining virtual scene data corresponding to the kth display mode in the N display modes based on the second scene map; wherein k is a positive integer less than or equal to N.
Illustratively, the at least two images may be stitched in a fisheye-stitching manner, and the stitched image is used as the second scene map of the target scene.
Optionally, the at least two cameras may be deployed on a particular vehicle. For example, during the development of a first vehicle, a second vehicle of a similar model may be selected and the at least two cameras deployed on it; the images they capture are used to obtain the second scene map for building the three-dimensional scene model. Virtual scene data is obtained from this second scene map and then used for imaging simulation when arranging and designing sensors on the first vehicle. Because the virtual scene data is acquired with cameras on a similar vehicle model, the resulting imaging simulation diagram is closer to the perception result of the sensor on the first vehicle.
Therefore, in the above example, the deployment positions of at least two cameras can be flexibly set according to the deployment positions of the sensors, so that the virtual scene data obtained based on the second scene map better meets the application requirements, and accordingly, the imaging simulation map is closer to the sensing result of the sensors in practical application, and the accuracy of sensor checking is improved.
Optionally, in some embodiments of the present disclosure, the method for processing sensor information may further include: acquiring vehicle speed information; and, in response to the vehicle speed information, displaying a driving simulation video of the sensor, where the driving simulation video includes M imaging simulation diagrams corresponding to M driving moments respectively, M being an integer greater than or equal to 2.
For example, the vehicle speed information may be the speed of the vehicle in which the sensor is located. Therefore, the vehicle speed information is the moving speed of the sensor.
Alternatively, the vehicle speed information may be obtained by interacting with the user. For example, the vehicle speed information is input by the user.
According to this embodiment, when the user inputs vehicle speed information, a plurality of driving moments can be determined from it, and a corresponding imaging simulation diagram is generated for each driving moment, yielding the driving simulation video. That is, driving simulation based on vehicle speed is supported, which helps the user check the perception effect of the sensor and improves checking completeness.
Optionally, on the basis of the foregoing embodiment, the method for processing sensor information may further include: determining external parameters of the sensor at each driving moment in the M driving moments based on the vehicle speed information and the configuration information of the sensor; and obtaining an imaging simulation diagram corresponding to each driving moment based on the internal parameters of the sensor, the external parameters of each driving moment and the virtual scene data.
Under the influence of vehicle speed, the position of the sensor differs across driving moments, so its external parameters change accordingly. In this embodiment, the external parameters of the sensor at each driving moment are determined from the vehicle speed information, so the corresponding imaging simulation diagram can be generated using the external parameters of that moment. This improves the realism of the driving simulation video and allows the sensor arrangement to be checked more accurately.
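A minimal sketch of deriving per-moment external parameters from vehicle speed (the heading, frame interval, and numbers are illustrative assumptions): the sensor's translation advances along the heading by speed times elapsed time, giving one extrinsic pose, and hence one imaging simulation diagram, per driving moment:

```python
# Sketch of deriving per-moment extrinsics from vehicle speed: the
# sensor's translation advances along the heading each driving moment,
# and one simulated image is generated per moment. Numbers illustrative.

def extrinsics_over_time(t0, speed_mps, heading, n_moments, dt=0.5):
    """Translate the mount position by speed * elapsed time along heading."""
    frames = []
    for i in range(n_moments):
        d = speed_mps * i * dt                        # distance travelled
        frames.append(tuple(t0[k] + heading[k] * d for k in range(3)))
    return frames

# 10 m/s straight ahead (+y), 4 driving moments spaced 0.5 s apart.
positions = extrinsics_over_time((0.0, 0.0, 1.2), 10.0, (0.0, 1.0, 0.0), 4)
print(positions[-1])  # -> (0.0, 15.0, 1.2)
```

Rendering the virtual scene once per returned pose, with unchanged internal parameters, yields the M frames of the driving simulation video.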
As can be seen, according to the method of the embodiments of the present disclosure, an imaging simulation diagram is generated based on the acquired configuration information of the sensor, so that the perception area of the sensor is presented through imaging simulation. Because the three-dimensional data of the check reference object is set in the virtual scene data, the imaging simulation diagram includes an image of the check reference object. Based on the imaging simulation diagram and the check reference object therein, a user can determine whether the perception effect of the sensor under the current configuration meets the checking requirement; the checking is comprehensive, intuitive, and efficient.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of the users involved all comply with the relevant laws and regulations and do not violate public order and good morals.
According to an embodiment of the present disclosure, the present disclosure further provides a sensor information processing apparatus for implementing the above method. Fig. 5 shows a schematic block diagram of the apparatus provided by an embodiment of the present disclosure. As shown in fig. 5, the apparatus includes:
A configuration information obtaining module 510, configured to obtain configuration information of the sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
an image display module 520, configured to display an imaging simulation diagram of the sensor in the target scene, where the imaging simulation diagram includes an image of the checking reference object; the imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data includes three-dimensional data of the checking reference object.
In some embodiments, the configuration information obtaining module 510 is further configured to:
acquiring configuration information of a checking parameter; the configuration information of the checking parameters is used for determining the three-dimensional data of the checking reference object corresponding to the checking parameters, so as to obtain the virtual scene data based on the three-dimensional data of the checking reference object and the three-dimensional data of the target scene.
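One way the configuration information of a checking parameter could determine the checking reference object's three-dimensional data is sketched below: each required sensing distance places a simple reference object (here, a vertical post) at that distance, and its points are merged with the target scene's 3D data to form the virtual scene data. The data shapes and the post geometry are illustrative assumptions, not specified by the disclosure.

```python
def build_virtual_scene(scene_points, check_config):
    """Merge checking-reference geometry into the target scene's 3D data.

    scene_points: list of (x, y, z) points of the target scene.
    check_config: maps the checking parameter (e.g. required sensing
    distances in meters) to reference objects placed at those distances.
    """
    scene = list(scene_points)  # copy the target scene's 3D points
    for distance in check_config["check_distances"]:
        # Illustrative reference object: a 2 m vertical post sampled at
        # 0.5 m intervals, placed `distance` meters ahead of the vehicle.
        post = [(distance, 0.0, h * 0.5) for h in range(5)]
        scene.extend(post)
    return scene
```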
In some embodiments, on the basis of Fig. 5, as shown in Fig. 6, the sensor information processing apparatus further includes:
a configuration information processing module 610 for determining a sensor configuration file based on configuration information of the sensor; the sensor configuration file is used for representing calibration parameters of the sensor;
a first image generation module 620, configured to generate an imaging simulation diagram based on the scene configuration file and the sensor configuration file; wherein the scene configuration file is used to characterize the virtual scene data.
In some embodiments, the image display module 520 is specifically configured to:
in response to a selection operation for an ith display mode in the N display modes, displaying an imaging simulation diagram of the sensor in the target scene corresponding to the ith display mode; the imaging simulation diagram corresponding to the ith display mode is obtained based on the calibration parameters and the virtual scene data corresponding to the ith display mode, N is an integer larger than or equal to 2, and i is a positive integer smaller than or equal to N.
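The selection among N display modes described above amounts to dispatching the rendering to the mode-specific virtual scene data. A minimal sketch of that dispatch is shown below; the renderer interface is an assumption for illustration only.

```python
def display_simulation(mode_index, renderers, calib, scene_data_by_mode):
    """Render the imaging simulation for the user-selected display mode.

    renderers: list of N per-mode render functions.
    scene_data_by_mode: virtual scene data for each of the N modes.
    mode_index: the selected mode i, 1-based, with 1 <= i <= N.
    """
    if not 1 <= mode_index <= len(renderers):
        raise ValueError("display mode out of range")
    render = renderers[mode_index - 1]
    # Each mode pairs the same calibration parameters with its own
    # virtual scene data, matching the claim language.
    return render(calib, scene_data_by_mode[mode_index - 1])
```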
In some embodiments, on the basis of fig. 5, as shown in fig. 7, the processing device of the sensor information further includes:
a first image processing module 710, configured to obtain a first scene map of a target scene based on a panoramic image acquired by a panoramic camera in the target scene;
a first map processing module 720, configured to obtain, based on the first scene map, virtual scene data corresponding to a jth display mode of the N display modes; wherein j is a positive integer less than or equal to N.
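To turn a first scene map captured by a panoramic camera into virtual scene data, each panorama pixel must be associated with a 3D viewing direction. The sketch below assumes an equirectangular panorama (360 degrees horizontally, 180 degrees vertically); the disclosure does not fix the panorama projection, so this is an illustrative assumption.

```python
import math

def equirect_pixel_to_ray(u, v, width, height):
    """Convert an equirectangular panorama pixel to a unit viewing ray.

    Assumes the panorama spans 360 deg horizontally and 180 deg
    vertically. Returns (x, y, z) with y up and z forward.
    """
    yaw = (u / width) * 2.0 * math.pi - math.pi      # -pi .. pi
    pitch = math.pi / 2.0 - (v / height) * math.pi   # +pi/2 .. -pi/2
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)
```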
In some embodiments, on the basis of fig. 7, as shown in fig. 8, the sensor information processing apparatus further includes:
the second image processing module 810, configured to obtain a second scene map of the target scene by stitching at least two images acquired from different orientations by at least two cameras in the target scene;
a second map processing module 820, configured to obtain, based on the second scene map, virtual scene data corresponding to a kth display mode of the N display modes; wherein k is a positive integer less than or equal to N.
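A second scene map can only be stitched from multiple cameras where their combined horizontal coverage has no gaps. The following sketch checks which yaw angles a camera set covers, given each camera's mounting yaw and horizontal field of view; it deliberately omits the warping and blending that real stitching requires, and all names are illustrative.

```python
def coverage_from_cameras(yaws_deg, hfov_deg):
    """Return the set of whole-degree yaw angles covered by the cameras.

    yaws_deg: mounting yaw of each camera in degrees.
    hfov_deg: horizontal field of view of each camera in degrees.
    A gap-free 360-degree set means a full second scene map can be
    stitched from the cameras' images.
    """
    covered = set()
    for yaw in yaws_deg:
        half = hfov_deg / 2.0
        for angle in range(int(yaw - half), int(yaw + half)):
            covered.add(angle % 360)  # wrap around the full circle
    return covered
```

For example, four 120-degree cameras mounted at 90-degree intervals cover the full circle with overlap, while two such cameras leave gaps.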
In some embodiments, on the basis of Fig. 5, as shown in Fig. 9, the sensor information processing apparatus further includes:
a vehicle speed information obtaining module 910, configured to obtain vehicle speed information;
the video display module 920 is used for responding to the vehicle speed information and displaying the driving simulation video of the sensor; the driving simulation video comprises M imaging simulation graphs corresponding to M driving moments respectively, wherein M is an integer greater than or equal to 2.
In some embodiments, on the basis of Fig. 9, as shown in Fig. 10, the sensor information processing apparatus further includes:
the vehicle speed information processing module 1010 is used for determining external parameters of the sensor at each driving time in the M driving times based on the vehicle speed information and the configuration information of the sensor;
and a second image generation module 1020, configured to obtain an imaging simulation diagram corresponding to each driving time based on the internal parameters of the sensor, the external parameters of each driving time, and the virtual scene data.
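The determination of external parameters at each of the M driving moments from the vehicle speed information can be sketched as follows, assuming straight-line motion at constant speed with a fixed sensor rotation; the motion model and all names are illustrative assumptions, not taken from the disclosure.

```python
def extrinsics_over_time(base_translation, heading, speed_mps, num_frames, dt):
    """Compute the sensor's world translation at each driving moment.

    base_translation: (x, y, z) of the sensor at the first moment.
    heading: unit vector of the driving direction.
    speed_mps: vehicle speed in meters per second.
    num_frames: M, the number of driving moments in the video.
    dt: time step between consecutive driving moments, in seconds.
    """
    frames = []
    for m in range(num_frames):
        d = speed_mps * m * dt  # distance traveled by moment m
        frames.append(tuple(base_translation[k] + heading[k] * d
                            for k in range(3)))
    return frames
```

Pairing each per-moment translation with the (constant) internal parameters and the virtual scene data yields the M imaging simulation diagrams that make up the driving simulation video.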
For a description of specific functions and examples of each module and each sub-module of the apparatus in the embodiment of the present disclosure, reference may be made to the related description of the corresponding steps in the foregoing method embodiments, and details are not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 11 shows a schematic block diagram of an example electronic device 1100 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in Fig. 11, the device 1100 includes a computing unit 1101, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 1102 or loaded from a storage unit 1108 into a random access memory (RAM) 1103. The RAM 1103 may also store the various programs and data needed for the operation of the device 1100. The computing unit 1101, the ROM 1102, and the RAM 1103 are connected to one another by a bus 1104. An input/output (I/O) interface 1105 is also connected to the bus 1104.
A number of components in the device 1100 are connected to the I/O interface 1105, including: an input unit 1106 such as a keyboard or a mouse; an output unit 1107 such as various types of displays and speakers; a storage unit 1108 such as a magnetic disk or an optical disc; and a communication unit 1109 such as a network card, a modem, or a wireless communication transceiver. The communication unit 1109 allows the device 1100 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 1101 may be any of a variety of general-purpose and/or special-purpose processing components having processing and computing capabilities. Some examples of the computing unit 1101 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 1101 performs the methods and processes described above, such as the sensor information processing method. For example, in some embodiments, the sensor information processing method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 1108. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 1100 via the ROM 1102 and/or the communication unit 1109. When the computer program is loaded into the RAM 1103 and executed by the computing unit 1101, one or more steps of the sensor information processing method described above may be performed. Alternatively, in other embodiments, the computing unit 1101 may be configured to perform the sensor information processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be realized in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SoCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel or sequentially or in different orders, and are not limited herein as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (19)

1. A method of processing sensor information, comprising:
acquiring configuration information of a sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
displaying an imaging simulation diagram of the sensor in a target scene, wherein the imaging simulation diagram comprises an image of a checking reference object; the imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data comprises three-dimensional data of the check reference object.
2. The method of claim 1, further comprising:
acquiring configuration information of a checking parameter; the configuration information of the checking parameters is used for determining the three-dimensional data of the checking reference object corresponding to the checking parameters, so as to obtain the virtual scene data based on the three-dimensional data of the checking reference object and the three-dimensional data of the target scene.
3. The method of claim 1 or 2, further comprising:
determining a sensor profile based on the configuration information of the sensor; wherein the sensor configuration file is used for characterizing calibration parameters of the sensor;
generating the imaging simulation diagram based on a scene configuration file and the sensor configuration file; wherein the scene profile is used to characterize the virtual scene data.
4. The method of claim 1 or 2, wherein said displaying an imaging simulation of said sensor comprises:
in response to a selection operation for an ith display mode of N display modes, displaying an imaging simulation diagram of the sensor in the target scene corresponding to the ith display mode; the imaging simulation diagram corresponding to the ith display mode is obtained based on the calibration parameters and the virtual scene data corresponding to the ith display mode, N is an integer greater than or equal to 2, and i is a positive integer less than or equal to N.
5. The method of claim 4, further comprising:
obtaining a first scene map of the target scene based on a panoramic image acquired by a panoramic camera in the target scene;
obtaining virtual scene data corresponding to the jth display mode in the N display modes based on the first scene map; wherein j is a positive integer less than or equal to N.
6. The method of claim 4, further comprising:
splicing at least two images acquired from different directions by at least two cameras in the target scene to obtain a second scene map of the target scene;
obtaining virtual scene data corresponding to the kth display mode in the N display modes based on the second scene map; wherein k is a positive integer less than or equal to N.
7. The method of claim 1 or 2, further comprising:
acquiring vehicle speed information;
responding to the vehicle speed information, and displaying a driving simulation video of the sensor; the driving simulation video comprises M imaging simulation graphs corresponding to M driving moments respectively, wherein M is an integer greater than or equal to 2.
8. The method of claim 7, further comprising:
determining external parameters of the sensor at each of the M driving moments based on the vehicle speed information and the configuration information of the sensor;
and obtaining an imaging simulation diagram corresponding to each driving moment based on the internal parameters of the sensor, the external parameters of each driving moment and the virtual scene data.
9. A sensor information processing apparatus comprising:
the configuration information acquisition module is used for acquiring configuration information of the sensor; the configuration information of the sensor is used for determining the calibration parameters of the sensor;
the image display module is used for displaying an imaging simulation image of the sensor in a target scene, and the imaging simulation image comprises an image of a checking reference object; the imaging simulation diagram is generated based on the calibration parameters and virtual scene data of the target scene, and the virtual scene data comprises three-dimensional data of the check reference object.
10. The apparatus of claim 9, the configuration information obtaining module further configured to:
acquiring configuration information of a checking parameter; the configuration information of the checking parameters is used for determining the three-dimensional data of the checking reference object corresponding to the checking parameters, so as to obtain the virtual scene data based on the three-dimensional data of the checking reference object and the three-dimensional data of the target scene.
11. The apparatus of claim 9 or 10, further comprising:
the configuration information processing module is used for determining a sensor configuration file based on the configuration information of the sensor; wherein the sensor configuration file is used for characterizing calibration parameters of the sensor;
the first image generation module is used for generating the imaging simulation diagram based on a scene configuration file and the sensor configuration file; wherein the scene profile is used to characterize the virtual scene data.
12. The apparatus according to claim 9 or 10, wherein the image display module is specifically configured to:
in response to a selection operation for an ith display mode of N display modes, displaying an imaging simulation diagram of the sensor in the target scene corresponding to the ith display mode; the imaging simulation diagram corresponding to the ith display mode is obtained based on the calibration parameters and the virtual scene data corresponding to the ith display mode, N is an integer greater than or equal to 2, and i is a positive integer less than or equal to N.
13. The apparatus of claim 12, further comprising:
the first image processing module is used for obtaining a first scene map of the target scene based on a panoramic image acquired by a panoramic camera in the target scene;
a first map processing module, configured to obtain, based on the first scene map, virtual scene data corresponding to a jth display mode of the N display modes; wherein j is a positive integer less than or equal to N.
14. The apparatus of claim 12, further comprising:
the second image processing module is used for splicing at least two images acquired from different directions by at least two cameras in the target scene to obtain a second scene map of the target scene;
a second map processing module, configured to obtain, based on the second scene map, virtual scene data corresponding to a kth display mode of the N display modes; wherein k is a positive integer less than or equal to N.
15. The apparatus of claim 9 or 10, further comprising:
the vehicle speed information acquisition module is used for acquiring vehicle speed information;
the video display module is used for responding to the vehicle speed information and displaying the driving simulation video of the sensor; the driving simulation video comprises M imaging simulation graphs corresponding to M driving moments respectively, wherein M is an integer greater than or equal to 2.
16. The apparatus of claim 15, further comprising:
the vehicle speed information processing module is used for determining external parameters of the sensor at each driving moment in the M driving moments based on the vehicle speed information and the configuration information of the sensor;
and the second image generation module is used for obtaining an imaging simulation diagram corresponding to each driving moment based on the internal parameters of the sensor, the external parameters of each driving moment and the virtual scene data.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-8.
19. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1-8.
CN202211423391.9A 2022-11-15 2022-11-15 Sensor information processing method and device, electronic equipment and storage medium Pending CN115727873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211423391.9A CN115727873A (en) 2022-11-15 2022-11-15 Sensor information processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115727873A true CN115727873A (en) 2023-03-03

Family

ID=85295896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211423391.9A Pending CN115727873A (en) 2022-11-15 2022-11-15 Sensor information processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115727873A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination