CN114549276A - A vehicle-mounted chip for an intelligent automobile

A vehicle-mounted chip for an intelligent automobile

Info

Publication number
CN114549276A
CN114549276A (application CN202210151855.9A)
Authority
CN
China
Prior art keywords
scene
dimensional
unit
vehicle
driving
Prior art date
Legal status
Pending
Application number
CN202210151855.9A
Other languages
Chinese (zh)
Inventor
杨萍
Current Assignee
Shenzhen Weiante Electronics Co ltd
Original Assignee
Shenzhen Weiante Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Weiante Electronics Co ltd filed Critical Shenzhen Weiante Electronics Co ltd
Priority to CN202210151855.9A
Publication of CN114549276A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 17/00 Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
    • G01S 17/88 Lidar systems specially adapted for specific applications
    • G01S 17/93 Lidar systems specially adapted for specific applications for anti-collision purposes
    • G01S 17/931 Lidar systems specially adapted for specific applications for anti-collision purposes of land vehicles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/04 Texture mapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/50 Lighting effects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T 17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2200/00 Indexing scheme for image data processing or generation, in general
    • G06T 2200/32 Indexing scheme for image data processing or generation, in general involving image mosaicing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Remote Sensing (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Traffic Control Systems (AREA)
  • Optical Radar Systems And Details Thereof (AREA)

Abstract

The invention provides a vehicle-mounted chip for an intelligent automobile, used for scene detection while the vehicle is running, and characterized by comprising: a first scene measurement processor, used for receiving the return signal of the laser radar (lidar) on the intelligent automobile and generating a global three-dimensional environment scene; a second scene measurement processor, used for receiving the video data of the camera device on the intelligent automobile, determining the local three-dimensional scene around the intelligent automobile, and marking the local scene in the global three-dimensional environment scene; and a main controller, used for drawing a driving map according to the global three-dimensional environment scene marked with the local scene and controlling the intelligent automobile to drive based on the driving map.

Description

A vehicle-mounted chip for an intelligent automobile
Technical Field
The invention relates to the field of vehicle chips, in particular to a vehicle-mounted chip for an intelligent automobile.
Background
Unmanned vehicles are appearing more and more in the automobile field, for example unmanned delivery vehicles and autonomous-driving cars; the technology has matured, and such vehicles can drive autonomously and execute tasks as required.
In the prior art, in the delivery field for example, unmanned equipment can already plan a path automatically according to an order and complete the whole process from picking up the goods to delivering them.
However, when an accident occurs, unmanned equipment often cannot resolve it by itself. For example, in warehouse logistics the drive system of the unmanned device may fail or break down, so that it cannot continue driving; or the power management system of the unmanned device may develop a problem, so that the remaining electric power cannot support completion of the task, and so on.
In the prior art, unmanned equipment relies on different sensing devices and imaging devices for scene acquisition; the current mainstream technologies are laser radar (lidar) and camera imaging.
However, in the prior art, although lidar detection is precise, it cannot render the surrounding scene realistically when that scene is modeled; moreover, weather affects lidar, so its performance is attenuated. Camera equipment alone cannot reach the detection performance of lidar. As a result, unmanned driving works poorly.
Disclosure of Invention
The invention provides a vehicle-mounted chip for an intelligent automobile, which is intended to solve the problem that in the prior art, although lidar detection is precise, the surrounding scene cannot be modeled realistically, camera equipment cannot reach the detection performance of lidar, and the effect of the unmanned-driving technique is therefore poor.
A vehicle-mounted chip for an intelligent automobile, used for scene detection during vehicle driving, comprises:
a first scene measurement processor, used for receiving the return signal of the lidar on the intelligent automobile and generating a global three-dimensional environment scene;
a second scene measurement processor, used for receiving the video data of the camera device on the intelligent automobile, determining the local three-dimensional scene around the intelligent automobile, and marking the local scene in the global three-dimensional environment scene;
and a main controller, used for drawing a driving map according to the global three-dimensional environment scene marked with the local scene and controlling the intelligent automobile to drive based on the driving map.
Preferably, the first scene measurement processor comprises:
an illumination source control unit: used for making the illumination source emit measuring pulses when an external power supply is connected, and for driving different lidar measuring channels to receive reflected light from the three-dimensional environment;
an area dividing unit: used for dividing detectable areas and generating scene elements according to the reflected light of the different elements; wherein,
the scene elements include at least: vehicles, traffic signs, pedestrians, roadside buildings, median greening zones;
an element marking unit: used for the user to mark each scene element one by one after the scene elements are generated, and for determining the global scene information;
a three-dimensional environment scene generation unit: used for calibrating a three-dimensional coordinate system for the positions of the reflected light according to the global scene information, and establishing the global three-dimensional environment scene after calibration; wherein,
the global three-dimensional environment scene comprises a marked position frame for each scene element and a posture position frame for the user's intelligent automobile;
and the adjacent distance between the marked position frames and the posture position frame is updated in real time.
Preferably, the first scene measurement processor further performs the following scene measurement steps:
Step 1: corresponding the reflected light to each scene element to construct a first model of the target scene;
wherein the first model comprises the three-dimensional terrain and the scene environment;
Step 2: determining element information of each scene element according to preset lidar imaging simulation parameters; wherein,
the element information comprises position, detection distance, field of view and resolution;
Step 3: generating a point cloud texture with texture resolution, transparency and color through a procedural texture algorithm based on procedural texture technology;
Step 4: generating a first texture model of each scene element model according to the point cloud texture;
Step 5: performing point cloud color matching according to the first texture model, and generating a point cloud image of each scene element according to the point cloud color matching; wherein,
the point cloud image is the partial scene image corresponding to each scene element.
Preferably, generating the point cloud image further comprises:
Step 51: determining a splicing area of each scene element according to the point cloud image;
Step 52: extracting point cloud information according to the point cloud image, and determining each item of point cloud data; wherein,
the point cloud data comprises the GPS information of each point and its height value from the ground;
Step 53: determining the boundary points of the point cloud data according to the area to be spliced; wherein,
a boundary point is a point whose height value from the ground equals a preset height value;
Step 54: determining the splicing coordinates of the points to be spliced from the splicing area;
Step 55: forming a three-dimensional point cloud image from the point cloud data of the splicing area according to the splicing coordinates; wherein,
each item of point cloud data comprises position coordinate information;
Step 56: matching and identifying the boundary points on the two sides of the area to be spliced according to the position coordinate information of the points to be spliced;
Step 57: splicing the point cloud images of adjacent scene elements according to the matching identification.
Preferably, the second scene measurement processor includes:
a video receiving unit: used for performing scene element identification on the video data, calibrating the identified scene elements in the corresponding key image frames to obtain a scene calibration video, and acquiring at least one key subject position and a target area position in the key image frames through the scene calibration video; wherein,
the key subject position is the real-time position of a scene element in the key image frame;
the target area position is the real-time area and real-time position of an adjacent scene element in the key image frame;
a local scene determination unit: used for determining, according to the scene calibration video, a local three-dimensional scene whose scene elements are fused with the global three-dimensional environment scene;
a scene labeling unit: used for calibrating the adjacent scene elements in the global three-dimensional environment scene according to the local three-dimensional scene; wherein,
the adjacent scene elements are calibrated through calibration frames;
a calibration conversion unit: used for converting, at each moment, the scene elements calibrated through the calibration frames and determining the adjacent elements;
a comparison unit: used for determining, in real time, the distance of the adjacent scene elements in the global three-dimensional environment scene according to the adjacent elements.
Preferably, the second scene measurement processor further comprises:
a weight presetting unit: used for presetting the three-dimensional weights of different scene elements;
a weight identification unit: used for obtaining the three-dimensional features of the scene elements corresponding to the video data, and determining the three-dimensional weights based on those three-dimensional features;
a preprocessing unit: used for preprocessing the three-dimensional features and the three-dimensional weights respectively to obtain a feature value matrix and a weight value matrix; wherein,
the preprocessing comprises: mapping each scene element in the global three-dimensional environment scene through the three-dimensional features and the three-dimensional weights, and constructing the feature value matrix and the weight value matrix from the mapping values of the element mapping;
a scene element determination unit: used for inputting the feature value matrix and the weight value matrix into a three-dimensional systolic (pulse) array for parallel calculation, and determining the importance of the different scene elements; wherein,
the scene elements of different importance are calibrated with different calibration frames.
Preferably, the second scene measurement processor further comprises:
an information output unit: used for identifying preset target information from the video data and displaying it on the display screen of the user vehicle; wherein,
the preset target information includes: vehicle position information, vehicle number information, vehicle type information and vehicle distance information;
a rendering unit: used for rendering the adjacent vehicles based on the identification result, with different identification colors for different types of vehicles; wherein,
scene elements of the same type are given the same identification color;
a video output unit: used for rendering the video data according to the identification colors and outputting the rendered video data in the user vehicle.
Preferably, the main controller includes:
a map drawing module: used for establishing a real-time driving map of the user vehicle according to first driving information and second driving information;
a travel control unit: used for determining, according to the real-time driving map, the driving speed and driving direction of the vehicles in front of and behind the user vehicle, and for controlling the user vehicle to drive automatically according to those driving speeds and driving directions.
Preferably, the main controller further comprises:
a position determination unit: used for determining the real-time vehicle position of the autonomous vehicle according to the driving map;
an area detection unit: used for detecting, according to the real-time vehicle position, whether the autonomous vehicle has driven into a detection area;
the detection areas include: vehicle travel lanes, deceleration areas, traffic light areas, pedestrian crossing areas, barrier gate areas, no-stopping areas and steering detection areas; wherein,
if the unmanned vehicle is detected to have driven into a detection area, it is judged whether the autonomous vehicle makes a driving error in that detection area;
and if the unmanned vehicle makes a driving error in the detection area, error information about the driving error of the autonomous vehicle is recorded.
Preferably, the main controller further comprises:
a determination unit: used for identifying the vehicle units within the lidar detection area according to the global three-dimensional environment scene;
a traffic condition reminding unit: used for determining the density of adjacent vehicles according to the number of vehicle units, and determining the traffic congestion state according to that density.
The invention has the beneficial effects that: the invention not only overcomes the shortcomings of unmanned driving with respect to both lidar and camera imaging, but also provides a new unmanned-driving mode, namely a three-dimensional global scene with local scene marking. First, the picture displayed during unmanned driving is clearer; second, a dynamic map is generated, so that, compared with the prior art, automatic navigation technology and unmanned-driving technology are combined.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention.
In the drawings:
FIG. 1 is a chip composition diagram of a vehicle chip for an intelligent vehicle according to an embodiment of the present invention;
FIG. 2 is a block diagram of a first scenario measurement processor in an embodiment of the present invention;
FIG. 3 is a block diagram of a second scenario measurement processor in an embodiment of the present invention;
fig. 4 is a block diagram of the main controller according to the embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
Example 1:
As shown in Fig. 1, the invention is a vehicle-mounted chip for an intelligent automobile, used for scene detection during vehicle driving, which comprises:
a first scene measurement processor: used for receiving the return signal of the lidar on the intelligent automobile and generating a global three-dimensional environment scene.
in the prior art, most vehicles are used based on reflected light when being implemented, rapid modeling of surrounding scenes is achieved through the reflected light, but the modeling can be influenced by weather, and under the condition of rain, snow or fog with bad weather, the light of the laser radar can be attenuated, so that the effect of the laser radar is utilized in the prior art, and certain defects exist. In the prior art, a vehicle is also provided with a camera to assist a laser radar in unmanned driving; on the other hand, when the data volume of the laser radar is too large, downtime is easy to occur or the command issuing speed is delayed. In addition, in the prior art, the adopted scheme is multi-target tracking of the laser radar (the multi-target tracking indicates that data processing is carried out on a plurality of targets at one time), so the technical scheme adopted in the prior art mainly combines a camera shooting technology and a target tracking technology of the laser radar through data processing on a software layer, the camera shooting technology and the target tracking technology belong to data fusion, after the laser radar carries out target tracking, the camera shooting recognition of a vehicle is carried out to recognize whether the vehicle exists or not, and further automatic driving is realized. This has resulted in the inherent problems of lidar. The invention is characterized in that the chip which is specially used for combining the laser radar and the camera shooting identification is used for processing the defects of the combination of different technologies at the software level. The first scene measurement processor does not adopt a multi-target tracking technology, and carries out environment modeling based on the laser radar to generate a three-dimensional environment scene. Only the scene of the three-dimensional environment is generated, and under the condition of not carrying out multi-target tracking, smaller data volume processing can be realized, and the problem of downtime or delayed instruction issuing speed can not occur.
A second scene measurement processor: used for receiving the video data of the camera device on the intelligent automobile, determining the local three-dimensional scene around the intelligent automobile, and marking the local scene in the global three-dimensional environment scene.
the second scene measurement processor is mainly used for processing video data acquired by a camera device on the automobile, and only local three-dimensional scenes only around the automobile, namely vehicles, people and other factors influencing the unmanned driving of the automobile, are determined on the video data. In the unmanned driving process, the main purpose of the vehicle is to automatically avoid surrounding vehicles, automatically determine a route and automatically comply with various traffic rules. In the unmanned technology, the video recognition has the advantages that the traffic signs can be better recognized, and vehicles or other scene elements close to the distance in the unmanned process can be recognized more accurately and more truly. However, in the unmanned technology, if data fusion is required, a point cloud technology is mainly adopted at present, but when the point cloud technology is adopted, the accuracy rate of complex target detection is not enough, and video recognition needs recognition training based on a neural network, and the training needs a large amount of accurate data support, and is also a technology combining laser radar and visual recognition from the surface of a software algorithm. Therefore, the second scene measurement processor only processes local visual data, and therefore, the method adopts a local scene marking mode after acquiring local video data, so that an automobile which is far away from the automobile and cannot influence the driving of the automobile or other elements which are close to the automobile are distinguished by marking other scene elements which influence the driving of the automobile, wherein the distinguishing purpose is that a driving map is finally generated.
And a main controller: used for drawing a driving map according to the global three-dimensional environment scene marked with the local scene, and controlling the intelligent automobile to drive based on the driving map.
The driving map is similar to an ordinary navigation map, but because it is oriented towards unmanned driving, the driving map of the invention leans more towards distance control and fast path planning, so it is a scene map based on distance marking that changes dynamically in real time. Compared with the prior art, the invention also addresses the fact that, during unmanned driving, a deep visual neural network is easily attacked by adversarial samples; for example, in the unmanned-driving field, once a traffic signal indicator is damaged or tampered with, the interpretation of the signal becomes completely wrong. The invention, however, does not fuse the two technologies at the data-processing level; they are combined at the chip level, which is equivalent to combining them at the data-acquisition level through customized hardware rather than fusing them at the data-processing level, so the deep visual neural network is not easily attacked and accidents are less likely.
The principle of the technical scheme is as follows: the invention combines three technologies. The lidar technique obtains a global three-dimensional scene; the camera device on the automobile then obtains the short-range scene and its elements; by marking the local scene in the three-dimensional scene, the distance to adjacent vehicles or other adjacent elements is analyzed; the main controller then controls the vehicle and automatically draws the driving map, which is a dynamic map.
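As a purely illustrative sketch of this data flow (not the actual chip implementation; all function names and data structures below are hypothetical), the three stages could be wired together as follows:

```python
def first_scene_processor(lidar_returns):
    """Build the global 3D environment scene from lidar return signals."""
    return {"elements": list(lidar_returns), "marked": []}

def second_scene_processor(video_frames):
    """Determine the local 3D scene around the vehicle from camera video."""
    return {"nearby_elements": list(video_frames)}

def mark_local_in_global(global_scene, local_scene):
    """Mark the locally observed elements inside the global scene."""
    global_scene["marked"] = local_scene["nearby_elements"]
    return global_scene

def main_controller(marked_scene):
    """Draw a driving map from the marked global scene."""
    return {"global": marked_scene["elements"], "local_marks": marked_scene["marked"]}

# lidar and camera data enter separately and are combined at the scene level,
# not fused at the raw-data level
global_scene = first_scene_processor(["building", "far_vehicle"])
local_scene = second_scene_processor(["near_vehicle", "pedestrian"])
driving_map = main_controller(mark_local_in_global(global_scene, local_scene))
print(driving_map)
```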
The beneficial effects of the above technical scheme are: the invention not only overcomes the shortcomings of unmanned driving with respect to both lidar and camera imaging, but also provides a new unmanned-driving mode, namely a three-dimensional global scene to which local scene marks are added. First, the picture displayed during unmanned driving is clearer; second, a dynamic map is generated, so that, compared with the prior art, automatic navigation technology and unmanned-driving technology are combined.
Example 2:
Preferably, as shown in Fig. 2, the first scene measurement processor includes:
an illumination source control unit: used for making the illumination source emit measuring pulses when an external power supply is connected, and for driving different lidar measuring channels to receive reflected light from the three-dimensional environment.
The illumination source is the laser emission source of the lidar. Since the invention relies on lidar technology, the illumination source is specially controlled. The reason why different lidar measuring channels are driven to receive reflected light from the three-dimensional environment is that the invention does not provide only a single receiving channel for the reflected laser light; this is done in order to acquire scene information to the greatest possible extent.
An area dividing unit: used for dividing detectable areas and generating scene elements according to the reflected light of the different elements; wherein,
the scene elements include at least: vehicles, traffic signs, pedestrians, roadside buildings, median greening zones.
The lidar determines the scene elements through the reflected light, so different areas are determined from different reflected light, and different scene elements are determined from different areas, for example trees and flowers in greening areas, vehicles and pedestrians in street areas, traffic signs at the roadside, and the like.
An element marking unit: used for marking each scene element one by one after the scene elements are generated, and determining the global scene information. The marking of a scene element is a mark that distinguishes different types of scene elements; it can be a division by color, or different scene elements can be labeled directly with text, so as to further mark the detectable areas.
A three-dimensional environment scene generation unit: used for calibrating a three-dimensional coordinate system for the positions of the reflected light according to the global scene information, and establishing the global three-dimensional environment scene after calibration; wherein,
in the invention, three-dimensional coordinate system calibration means calibrating the position of each scene element in the three-dimensional coordinate system.
The global three-dimensional environment scene comprises a marked position frame for each scene element and a posture position frame for the user's intelligent automobile. When the positions are calibrated, the invention uses marking frames, that is, the adjacent distance between different marked position frames is judged from a frame defined by the length, width and height of the scene element.
And the adjacent distance between the marked position frames and the posture position frame is updated in real time.
The principle of the technical scheme is as follows: the lidar divides the scene elements, including vehicles, traffic signs, pedestrians, roadside buildings and road greening zones, through the light reflected in the three-dimensional environment; a three-dimensional panoramic scene is then constructed from these elements on the basis of the three-dimensional coordinate system, and the different elements, that is, the different automobiles and the different elements in the environment, are labeled through the marking frames.
The beneficial effects of the above technical scheme are: the invention can establish a three-dimensional panoramic scene picture which, compared with the prior art, corresponds better to the actual scene.
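As a minimal sketch of how the adjacent distance between a marked position frame and the posture position frame could be updated in real time, assuming axis-aligned three-dimensional boxes (the box representation and function name are illustrative assumptions, not taken from the patent):

```python
import math

def box_gap(box_a, box_b):
    """Smallest gap between two axis-aligned 3D boxes given as
    (min_x, min_y, min_z, max_x, max_y, max_z); 0 if they overlap."""
    gaps = []
    for axis in range(3):
        a_min, a_max = box_a[axis], box_a[axis + 3]
        b_min, b_max = box_b[axis], box_b[axis + 3]
        gaps.append(max(0.0, max(a_min, b_min) - min(a_max, b_max)))
    return math.sqrt(sum(g * g for g in gaps))

# posture position frame of the user's vehicle and a marked frame of a nearby car
ego_frame = (0.0, 0.0, 0.0, 4.5, 1.8, 1.5)
car_frame = (10.0, 0.2, 0.0, 14.5, 2.0, 1.5)
print(box_gap(ego_frame, car_frame))  # adjacent distance, re-evaluated every update cycle
```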
Example 3:
Preferably, the first scene measurement processor further performs the following scene measurement steps:
Step 1: corresponding the reflected light to each scene element to construct a first model of the target scene.
The first model is a database model, used for organizing the scene elements and the reflected light: the reflected light of the area corresponding to each element is associated with that scene element.
Wherein the first model comprises the three-dimensional terrain and the scene environment.
Step 2: determining element information of each scene element according to preset lidar imaging simulation parameters; wherein,
the element information comprises position, detection distance, field of view and resolution.
The simulation parameters are used for identification: different simulation parameters are set for the generated scene elements, so that each piece of data carries specific information, which facilitates the extraction of the element information.
Step 3: generating a point cloud texture with texture resolution, transparency and color through a procedural texture algorithm based on procedural texture technology. Procedural texture is a method within point cloud technology, but here it differs from the point cloud processing of the prior art, because what is acquired is the texture resolution, transparency and color of the point cloud, not the resolution, transparency and color of a whole scene image: the first model of the invention is based on the lidar, so a real image cannot be acquired, and the acquired texture is based on light reflection. Compared with the prior art, such a point cloud texture is better suited to three-dimensional modeling with a lidar.
Step 4: generating a first texture model of each scene element model according to the point cloud texture.
Step 5: performing point cloud color matching according to the first texture model, and generating a point cloud image of each scene element according to the point cloud color matching. The color obtained from the point cloud texture differs from the color in the actual scene and cannot be used directly, so point cloud color matching is performed to obtain a more realistic point cloud image; the purpose of generating the point cloud image is to prevent scene elements from being left out of the driving map when the three-dimensional modeling is carried out.
The principle of the technical scheme is as follows: the invention constructs the three-dimensional terrain, ground features and scene environment from the reflected light of the scene elements, then determines what each element is by setting the lidar imaging simulation parameters, and finally realizes the color matching of the scene elements through procedural texture technology and the point cloud texture, so that the scene elements are displayed in a more realistic picture.
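A hedged sketch of the procedural point-cloud-texture idea described above: per-point texture attributes (color and transparency) are generated from the reflection intensity and the element type rather than from a real photograph. The color table and the noise term are illustrative assumptions, not the patent's actual algorithm:

```python
import random

def procedural_point_texture(points, element_type):
    """Assign a (color, alpha) texture value to each lidar point of one scene
    element, driven only by reflection intensity and element type."""
    base_color = {                      # illustrative point cloud color matching table
        "vehicle": (200, 60, 60),
        "pedestrian": (60, 200, 60),
        "traffic_sign": (60, 60, 200),
        "building": (150, 150, 150),
    }.get(element_type, (120, 120, 120))
    textured = []
    for x, y, z, intensity in points:   # intensity in [0, 1]
        noise = random.uniform(-10, 10)               # procedural variation
        color = tuple(min(255, max(0, int(c * intensity + noise))) for c in base_color)
        alpha = min(1.0, 0.3 + 0.7 * intensity)       # transparency from reflection strength
        textured.append(((x, y, z), color, alpha))
    return textured

cloud = [(1.0, 2.0, 0.5, 0.8), (1.1, 2.1, 0.6, 0.4)]
print(procedural_point_texture(cloud, "vehicle")[0])
```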
Example 4:
Preferably, generating the point cloud image further comprises:
Step 51: determining a splicing area of each scene element according to the point cloud image.
Because the lidar has several channels of reflected light, the point cloud image obtained from the reflected light is one area after another; the reflected light of each area is mostly different, and each area contains certain scene elements, so the invention can splice the whole scene together through the splicing areas.
Step 52: extracting point cloud information according to the point cloud image, and determining each item of point cloud data; wherein,
the point cloud data comprises the GPS information of each point and its height value from the ground.
The area to be spliced is surrounded by boundary points, a boundary point being a point whose height value from the ground equals a preset height value; the boundary points are the boundaries of the regions formed by the reflected light acquired by each reflected-light channel of the lidar.
Step 53: determining the boundary points of the point cloud data according to the area to be spliced; wherein,
a boundary point is a point whose height value from the ground equals the preset height value.
Step 54: determining the splicing coordinates of the points to be spliced from the splicing area.
The splicing coordinates are the coordinates of the boundary points of the splicing area.
Step 55: forming a three-dimensional point cloud image from the point cloud data of the splicing area according to the splicing coordinates; wherein,
each item of point cloud data comprises position coordinate information.
Step 56: matching and identifying the boundary points on the two sides of the area to be spliced according to the position coordinate information of the points to be spliced. Since the coordinates of the boundary points of adjacent areas are the same when splicing, this matching identification is a matching identification based on the position coordinates.
Step 57: splicing the point cloud images of adjacent scene elements according to the matching identification.
The principle of the technical scheme is as follows: the generated point cloud images may cover several scenes, and the positions and heights of the scene elements all differ, so the scenes need to be spliced in order to display a larger scene area and determine the global scene. One effect of the splicing is to remove interference information. The reason the interference information is deleted is that the reflected light of the lidar is repeated during splicing; the repeated reflected light is interference information, and there is also reflected light from scene elements that are not nearby. Through splicing and the use of coordinates, each scene element lies in only one position area and has fixed position coordinates, so only one of any repeated coordinates is taken during splicing, and the interference information is thereby deleted.
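A minimal sketch of the splicing step, assuming each region's points carry position coordinates and that boundary points shared by adjacent regions have identical coordinates, as described above (the data layout and tolerance are assumptions):

```python
def stitch_regions(region_a, region_b, boundary_height=0.0, tol=1e-6):
    """Splice two point-cloud regions.  Each point is (x, y, z).  Boundary points
    are points whose height equals the preset value; a boundary point repeated in
    the second region is interference information and is kept only once."""
    def key(p):
        return (round(p[0] / tol), round(p[1] / tol), round(p[2] / tol))

    boundary_a = {key(p) for p in region_a if abs(p[2] - boundary_height) < tol}
    spliced = list(region_a)
    for p in region_b:
        if key(p) in boundary_a:     # duplicated boundary point -> skip
            continue
        spliced.append(p)
    return spliced

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 2.5)]
b = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (2.0, 1.0, 3.0)]
print(len(stitch_regions(a, b)))  # 5 points: the shared boundary point is kept once
```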
Example 5:
Preferably, as shown in Fig. 3, the second scene measurement processor includes:
a video receiving unit: used for performing scene element identification on the video data, calibrating the identified scene elements in the corresponding key image frames to obtain a scene calibration video, and acquiring at least one key subject position and a target area position in the key image frames through the scene calibration video; wherein,
the key subject position is the real-time position of a scene element in the key image frame;
the target area position is the real-time area and real-time position of an adjacent scene element in the key image frame.
The key subject position and the target area position are the positions of the vehicles near the user vehicle during unmanned driving, described by the coordinates of their upper, lower, left and right extents. The target area position is the position of an adjacent vehicle, which is the position to be calibrated by the calibration frame.
A local scene determination unit: used for determining, according to the scene calibration video, a local three-dimensional scene whose scene elements are fused with the global three-dimensional environment scene.
Scene element fusion means that the same vehicles, the same road, the same traffic signs and the same pedestrians exist in both scenes; that scene is the local three-dimensional scene, and the fusion of the scene elements means the scene elements are the same.
A scene labeling unit: used for calibrating the adjacent scene elements in the global three-dimensional environment scene according to the local three-dimensional scene; wherein,
the adjacent scene elements are calibrated through calibration frames.
Adjacent scene element calibration means calibrating the adjacent vehicles, pedestrians, greenery and other scene elements that obstruct unmanned driving.
A calibration conversion unit: used for converting, at each moment, the scene elements calibrated through the calibration frames and determining the adjacent elements.
A comparison unit: used for determining, in real time, the distance of the adjacent scene elements in the global three-dimensional environment scene according to the adjacent elements. The distance of an adjacent scene element is the distance to another car, a pedestrian or another driving obstacle adjacent to the unmanned vehicle.
The principle of the technical scheme is as follows: the second scene measurement processor of the invention mainly processes the video data so as to perform scene conversion and local scene labeling, and mainly determines the adjacent elements, such as adjacent vehicles.
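A sketch of the scene element fusion and distance determination just described: each element seen in the camera-derived local scene is matched to the nearest element of the same type in the lidar-derived global scene, and its distance to the user vehicle is computed. The matching rule, data layout and threshold are assumptions for illustration:

```python
import math

def fuse_local_into_global(local_elements, global_elements, max_dist=3.0):
    """local_elements / global_elements: lists of dicts with 'type' and 'pos' (x, y, z).
    Returns (index of matched global element, local element, distance to the ego vehicle)."""
    calibrated = []
    for loc in local_elements:
        best, best_d = None, max_dist
        for idx, glob in enumerate(global_elements):
            if glob["type"] != loc["type"]:
                continue
            d = math.dist(loc["pos"], glob["pos"])
            if d < best_d:
                best, best_d = idx, d
        if best is not None:
            ego_distance = math.dist((0.0, 0.0, 0.0), loc["pos"])   # ego vehicle at the origin
            calibrated.append((best, loc, ego_distance))
    return calibrated

local_scene = [{"type": "vehicle", "pos": (8.0, 1.0, 0.0)}]
global_scene = [{"type": "vehicle", "pos": (8.5, 1.2, 0.0)},
                {"type": "building", "pos": (30.0, 5.0, 0.0)}]
print(fuse_local_into_global(local_scene, global_scene))
```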
Example 6:
Preferably, the second scene measurement processor further comprises:
a weight presetting unit: used for presetting the three-dimensional weights of different scene elements.
The three-dimensional weight is the degree of importance, in the three-dimensional scene, assigned to elements such as people and vehicles according to their proximity, the type of scene element, and so on. Because the goal of unmanned driving is to avoid obstacles and to drive without violating the traffic rules, the three-dimensional weight of a scene element reflects how difficult it is to avoid that scene element during unmanned driving.
A preprocessing unit: used for preprocessing the three-dimensional features and the three-dimensional weights respectively to obtain a feature value matrix and a weight value matrix; wherein,
the preprocessing comprises: mapping each scene element in the global three-dimensional environment scene through the three-dimensional features and the three-dimensional weights, and constructing the feature value matrix and the weight value matrix from the mapping values of the element mapping.
A scene element determination unit: used for inputting the feature value matrix and the weight value matrix into a three-dimensional systolic (pulse) array for parallel calculation, and determining the importance of the different scene elements; wherein,
the scene elements of different importance are calibrated with different calibration frames.
The importance of the different scene elements, that is, the importance of each scene element in the three-dimensional scene, can be determined through the parallel calculation of the feature value matrix and the weight value matrix: the proportion of a scene element within the whole scene can be determined from the feature matrix values, and in practice the feature calculation only computes the feature value matrix of the obstacles that must be avoided, such as vehicles and pedestrians. The weight value matrix is used to determine the importance of each scene element in the feature value matrix. The final importance is therefore obtained from the parallel calculation on the three-dimensional systolic array, and the calibration is carried out accordingly.
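A minimal sketch of the importance calculation: the feature value matrix and the weight value matrix are combined row by row, which on the chip would run in parallel on the systolic array; here ordinary NumPy arithmetic stands in for that hardware, and the matrix shapes and values are assumptions:

```python
import numpy as np

# rows = scene elements, columns = three-dimensional features
# (e.g. proximity, direction deviation, size), from the element mapping
feature_matrix = np.array([[0.9, 0.4, 0.7],    # nearby vehicle
                           [0.2, 0.1, 0.3],    # distant building
                           [0.8, 0.9, 0.2]])   # pedestrian
weight_matrix = np.array([[0.5, 0.3, 0.2],     # per-element feature weights
                          [0.5, 0.3, 0.2],
                          [0.6, 0.3, 0.1]])

# importance of each scene element = weighted sum of its features
importance = (feature_matrix * weight_matrix).sum(axis=1)
order = np.argsort(-importance)       # most important elements first
print(importance, order)              # more important elements get stronger calibration frames
```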
In a specific embodiment, the three-dimensional weight is obtained as follows:
Step 1: first, based on the scene elements that may be encountered, a scene element set C = (c_1, c_2, c_3, ..., c_n) is constructed, containing n scene elements with index i, i <= n; these n scene elements include cars, traffic signs, pedestrians, and all other obstacles that may be present in an unmanned-driving environment.
Step 2: some of these scene elements carry a very large weight in the calculation and some a very small one, so for each scene element an evaluation weight during unmanned driving is set:
[evaluation-weight formula for Q(c_i), reproduced only as an image in the published text]
wherein Q(c_i) represents the evaluation weight of the i-th scene element; c_i represents the element feature of the i-th scene element; p_i represents the element base weight of the i-th scene element.
Step 2 mainly determines the evaluation weight of each scene element, that is, the weight proportion of each scene element among all scene elements during unmanned driving. In this calculation the invention first sets a base weight p_i for each scene element; this is a single, manually specified weight reflecting the relevance of that element to the unmanned vehicle in the unmanned-driving field. Step 2 then calculates the weight that each scene element should have among all the scene elements.
Step 3: in step 3 the distance, direction and state (moving, fixed, or sudden, the state including a danger level) of each scene element relative to the unmanned vehicle in the real-time three-dimensional scene must be combined with the degree of influence on the vehicle; in this calculation the vehicle's own state alone is not sufficient. This is expressed by the following formula:
[three-dimensional-weight formula for S(c_i), reproduced only as an image in the published text]
wherein S(c_i) represents the three-dimensional weight of the i-th scene element; z_j represents the j-th state parameter of the unmanned vehicle itself, j = 1, 2, 3, ..., k, k being the number of categories of the vehicle's own state parameters; H represents the reference state parameter; d_i represents the real-time distance of the i-th scene element from the unmanned vehicle; D represents the reference distance between a scene element and the unmanned vehicle during unmanned driving; f_i represents the real-time direction deviation of the i-th scene element from the unmanned vehicle (the direction parameter is expressed as an angle, the angle F being the reference angle); F is the reference direction of the unmanned vehicle during unmanned driving; y_i represents the state of the i-th scene element.
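The two formulas above appear only as images in the published text. One instantiation consistent with the symbol definitions, offered purely as an illustrative reading and not as the patent's verbatim formulas, would be:

Q(c_i) = (p_i * c_i) / (sum over j of p_j * c_j)
S(c_i) = Q(c_i) * ((z_1 + ... + z_k) / H) * (d_i / D) * (f_i / F) * y_i

Here the first expression normalizes each element's base-weighted feature against all elements, and the second multiplies the vehicle's own-state term by the distance term, the direction term and the element-state term, as the following paragraph describes.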
The larger the value S(c_i) calculated in step 3, the larger the three-dimensional weight. In this step, the invention takes the unmanned vehicle's own-state term (built from z_1, ..., z_k and the reference state parameter H) and multiplies it by the other parameters of the scene element in the three-dimensional scene to determine the specific three-dimensional weight: the distance term (built from d_i and D) represents the degree of influence of the real-time distance on unmanned driving; the direction term (built from f_i and F) represents the degree of influence of the real-time direction on unmanned driving; and y_i represents the degree of influence of the real-time state of the scene element on unmanned driving. From these three values, together with the vehicle's own state, the three-dimensional weight of each scene element is determined.
Example 7:
Preferably, the second scene measurement processor further comprises:
an information output unit: used for identifying preset target information from the video data and displaying it on the display screen of the user vehicle; wherein,
the preset target information includes: vehicle position information, vehicle number information, vehicle type information and vehicle distance information.
Although a driver is not needed during unmanned driving, the occupants of the vehicle still need to be able to drive it, which requires displaying the information around the vehicle, that is, the preset target information.
A rendering unit: used for rendering the adjacent vehicles based on the identification result, with different identification colors for different types of vehicles; wherein,
scene elements of the same type are given the same identification color.
The purpose of the rendering is to distinguish the different vehicles while the unmanned vehicle is driving, so that the traffic state on the road, and the obstacles influencing the driving of the vehicle, can be judged accurately during driving.
A video output unit: used for rendering the video data according to the identification colors and outputting the rendered video data in the user vehicle.
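A sketch of the rendering idea, assuming OpenCV-style drawing on the video frames (the use of cv2 and the color table are illustrative; the patent does not name a library):

```python
import cv2

TYPE_COLORS = {                 # same vehicle type -> same identification color (BGR)
    "car": (0, 0, 255),
    "truck": (0, 255, 255),
    "bus": (255, 0, 0),
}

def render_vehicles(frame, detections):
    """detections: list of (vehicle_type, (x, y, w, h), distance_m)."""
    for vtype, (x, y, w, h), dist in detections:
        color = TYPE_COLORS.get(vtype, (0, 255, 0))
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, f"{vtype} {dist:.1f} m", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 1)
    return frame
```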
Example 8:
Preferably, as shown in Fig. 4, the main controller includes:
a map drawing module: used for establishing a real-time driving map of the user vehicle according to the first driving information and the second driving information;
a travel control unit: used for determining, according to the real-time driving map, the driving speed and driving direction of the vehicles in front of and behind the user vehicle, and for controlling the user vehicle to drive automatically according to those driving speeds and driving directions.
The principle and the beneficial effects of the technical scheme are as follows: when the real-time driving map is constructed, the driving information is divided into two parts, the first being the driving information of the vehicle itself and the second being the driving information of the adjacent vehicles; from this driving information the driving paths on the road can be judged. When the travel control is performed, the first driving information and the second driving information make all the driving data of the vehicle and of the adjacent vehicles explicit, so the driving direction and a suitable driving speed of the vehicle can be judged, and the driving speed and direction of the adjacent vehicles can also be used to judge the current driving speed and direction, that is, how the unmanned vehicle should adjust its current driving speed and driving direction.
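A minimal sketch of the travel-control idea: the vehicle's own driving information (first driving information) and the neighboring vehicles' driving information (second driving information) are combined to choose a speed and direction. The following-distance rule below is a placeholder, not the patent's control law:

```python
def plan_motion(own, front, safe_gap=20.0):
    """own / front: dicts with 'speed' (m/s) and 'heading' (deg); the front vehicle
    also carries 'gap' (m).  Returns the commanded speed and heading."""
    if front is None:
        return own["speed"], own["heading"]           # keep current motion
    if front["gap"] < safe_gap:
        speed = min(own["speed"], front["speed"])     # too close: do not exceed the front vehicle
    else:
        speed = own["speed"]
    return speed, own["heading"]                      # lane keeping only in this sketch

print(plan_motion({"speed": 16.0, "heading": 0.0},
                  {"speed": 12.0, "heading": 0.0, "gap": 15.0}))   # -> (12.0, 0.0)
```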
Example 9:
Preferably, as shown in Fig. 4, the main controller further includes:
a position determination unit: used for acquiring the real-time position of the unmanned vehicle according to the driving map. In practice there are two driving maps, one being the driving map covering the whole area sensed by the lidar and the video data, and the other being the driving map fused with the navigation map, so the real-time position of the vehicle can be determined.
An area detection unit: used for detecting, according to the position of the unmanned vehicle, whether the unmanned vehicle has driven into a detection area.
The detection areas comprise vehicle travel lanes, deceleration areas, traffic light areas, pedestrian crossing areas, barrier gate areas, no-stopping areas and steering detection areas; wherein,
if the unmanned vehicle is detected to have driven into a detection area, it is judged whether the unmanned vehicle makes a driving error in that detection area.
Because the traffic rules must be obeyed, the invention judges, according to the traffic rules of these detection areas, how the vehicle should operate in them, and detects whether the vehicle performs a wrong operation during driving; if a wrong operation occurs, there is a defect in the unmanned-driving control system, and the defect needs to be repaired.
And if the unmanned vehicle makes a driving error in the detection area, error information about the driving error of the unmanned vehicle is recorded.
The principle of the technical scheme is as follows: the main controller of the invention determines the areas to be monitored during driving through the driving map, for example the vehicle travel lanes, deceleration areas, traffic light areas, pedestrian crossing areas, barrier gate areas, no-stopping areas and steering detection areas, so that it can be judged better whether errors occur during unmanned driving.
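A sketch of the area-detection and error-recording step, assuming rectangular detection areas and a simple per-area speed rule (the area layout and rules are illustrative):

```python
from datetime import datetime

DETECTION_AREAS = [
    {"name": "deceleration area", "bounds": (100.0, 150.0, -5.0, 5.0), "max_speed": 8.0},
    {"name": "traffic light area", "bounds": (200.0, 210.0, -5.0, 5.0), "max_speed": 0.0},
]
error_log = []

def check_detection_areas(x, y, speed):
    """Record error information whenever the vehicle violates the rule of the area it is in."""
    for area in DETECTION_AREAS:
        x0, x1, y0, y1 = area["bounds"]
        if x0 <= x <= x1 and y0 <= y <= y1 and speed > area["max_speed"]:
            error_log.append({"time": datetime.now().isoformat(),
                              "area": area["name"],
                              "speed": speed})

check_detection_areas(120.0, 0.0, 12.0)   # speeding inside the deceleration area
print(error_log)
```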
Example 10:
Preferably, the main controller further comprises:
a determination unit: used for identifying the vehicle units within the lidar detection area according to the global three-dimensional environment scene;
a traffic condition reminding unit: used for determining the density of adjacent vehicles according to the number of vehicle units, and determining the traffic congestion state according to that density.
The principle of the technical scheme is as follows: according to the invention, the density of the adjacent vehicles can be determined according to the three-dimensional environment scene, so that the real-time traffic state can be judged.
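A sketch of the traffic-condition reminder: the vehicle units detected in the lidar scene are counted per unit of road length to give an adjacent-vehicle density, and a congestion state is read off a threshold (the thresholds are assumptions):

```python
def congestion_state(vehicle_count, observed_length_m):
    """Adjacent-vehicle density in vehicles per 100 m and a coarse congestion label."""
    density = vehicle_count / (observed_length_m / 100.0)
    if density < 3:
        return density, "free flow"
    if density < 8:
        return density, "slow"
    return density, "congested"

print(congestion_state(vehicle_count=12, observed_length_m=150.0))  # (8.0, 'congested')
```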
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A vehicle-mounted chip for an intelligent automobile, used for scene detection during vehicle driving, characterized by comprising:
a first scene measurement processor: used for receiving the return signal of the lidar on the intelligent automobile and generating a global three-dimensional environment scene;
a second scene measurement processor: used for receiving the video data of the camera device on the intelligent automobile, determining the local three-dimensional scene around the intelligent automobile, and marking the local scene in the global three-dimensional environment scene;
and a main controller, used for drawing a driving map according to the global three-dimensional environment scene marked with the local scene and controlling the intelligent automobile to drive based on the driving map.
2. The vehicle-mounted chip for an intelligent automobile according to claim 1, wherein the first scene measurement processor comprises:
an illumination source control unit: used for making the illumination source emit measuring pulses when an external power supply is connected, and for driving different lidar measuring channels to receive reflected light from the three-dimensional environment;
an area dividing unit: used for dividing detectable areas and generating scene elements according to the reflected light of the different elements; wherein,
the scene elements include at least: vehicles, traffic signs, pedestrians, roadside buildings, median greening zones;
an element marking unit: used for the user to mark each scene element one by one after the scene elements are generated, and for determining the global scene information;
a three-dimensional environment scene generation unit: used for calibrating a three-dimensional coordinate system for the positions of the reflected light according to the global scene information, and establishing the global three-dimensional environment scene after calibration; wherein,
the global three-dimensional environment scene comprises a marked position frame for each scene element and a posture position frame for the user's intelligent automobile;
and the adjacent distance between the marked position frames and the posture position frame is updated in real time.
3. The on-vehicle chip for an intelligent automobile according to claim 1, wherein the first scene measurement processor further performs the following scene measurement steps:
step 1: associating the reflected light with each scene element to construct a first model of the target scene;
wherein the first model comprises the three-dimensional terrain and the scene environment;
step 2: determining element information of each scene element according to preset laser radar imaging simulation parameters; wherein
the element information comprises position, detection distance, field of view, and resolution;
step 3: generating a point cloud texture with texture resolution, transparency, and color by a procedural texture algorithm based on procedural texture technology;
step 4: generating a first texture model of each scene model according to the point cloud texture;
step 5: performing point cloud color matching according to the first texture model, and generating a point cloud image of each scene element from the color-matched point cloud; wherein
the point cloud image is a partial scene image corresponding to each scene element.
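Steps 3 to 5 amount to giving each simulated element's points a procedurally generated texture (resolution, transparency, color) and assembling the colored points into a per-element point cloud image. The minimal sketch below assumes a toy noise-jitter texture in place of the unspecified procedural texture algorithm; procedural_texture and element_point_cloud_image are hypothetical names.

```python
import random

def procedural_texture(n_points, resolution=0.1, alpha=0.8, base_rgb=(0.2, 0.6, 0.9), seed=0):
    """Toy procedural-texture pass: jitter a base color per point and attach
    the texture resolution and transparency (stand-ins for the claimed attributes)."""
    rng = random.Random(seed)
    texels = []
    for _ in range(n_points):
        jitter = rng.uniform(-0.05, 0.05)
        rgb = tuple(min(1.0, max(0.0, c + jitter)) for c in base_rgb)
        texels.append({"resolution": resolution, "alpha": alpha, "rgb": rgb})
    return texels

def element_point_cloud_image(points_xyz, texels):
    """Pair each simulated point with its texel so every scene element gets its own colored partial image."""
    return [{"xyz": p, **t} for p, t in zip(points_xyz, texels)]

points = [(1.0, 0.5, 0.0), (1.1, 0.5, 0.0), (1.2, 0.6, 0.1)]
image = element_point_cloud_image(points, procedural_texture(len(points)))
```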
4. The on-vehicle chip for an intelligent automobile according to claim 3, wherein generating the point cloud image further comprises:
step 51: determining the stitching region of each scene element according to the point cloud image;
step 52: extracting point cloud information from the point cloud image and determining each item of point cloud data; wherein
the point cloud data comprises the GPS information of each point and its height above the ground;
step 53: determining boundary points of the point cloud data according to the region to be stitched; wherein
a boundary point is a point whose height above the ground equals a preset height value;
step 54: determining the stitching coordinates of the points to be stitched from the stitching region;
step 55: forming a three-dimensional point cloud image from the point cloud data of the stitching region according to the stitching coordinates; wherein
each item of point cloud data comprises position coordinate information;
step 56: matching and identifying boundary points on both sides of the region to be stitched according to the position coordinate information of the points to be stitched;
step 57: stitching the point cloud images of adjacent scene elements according to the matching identification.
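Steps 51 to 57 can be read as: find the boundary points (points at the preset height) of two adjacent element clouds, match them across the stitching region by their coordinates, and merge the clouds when matches exist. A minimal sketch under that reading follows, with each point represented as a dict holding x, y, and height; boundary_points, match_boundaries, and stitch are assumed names, and the tolerance values are illustrative.

```python
def boundary_points(cloud, preset_height, tol=0.01):
    """A boundary point is one whose height above ground equals the preset value (within a tolerance)."""
    return [p for p in cloud if abs(p["height"] - preset_height) <= tol]

def match_boundaries(left, right, max_gap_m=0.5):
    """Pair boundary points on the two sides of the stitching region by nearest planar distance."""
    pairs = []
    for lp in left:
        if not right:
            break
        best = min(right, key=lambda rp: (rp["x"] - lp["x"]) ** 2 + (rp["y"] - lp["y"]) ** 2)
        gap = ((best["x"] - lp["x"]) ** 2 + (best["y"] - lp["y"]) ** 2) ** 0.5
        if gap <= max_gap_m:
            pairs.append((lp, best))
    return pairs

def stitch(cloud_a, cloud_b, preset_height=0.0):
    """Merge the point cloud images of two adjacent scene elements when their boundaries match."""
    pairs = match_boundaries(boundary_points(cloud_a, preset_height),
                             boundary_points(cloud_b, preset_height))
    return cloud_a + cloud_b if pairs else None
```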
5. The on-vehicle chip for an intelligent automobile according to claim 1, wherein the second scene measurement processor comprises:
a video receiving unit, configured to perform scene element identification on the video data, to calibrate the identified scene elements against key image frames, and to obtain at least one key subject position and a target region position in each key image frame through the scene calibration video; wherein
the key subject position is the real-time position of a scene element in the key image frame, and
the target region position is the real-time area and real-time position of adjacent scene elements in the key image frame;
a local scene determination unit, configured to determine, from the scene calibration video, a local three-dimensional scene whose scene elements fuse with the global three-dimensional environment scene;
a scene labeling unit, configured to calibrate adjacent scene elements in the global three-dimensional environment scene according to the local three-dimensional scene; wherein
the calibration of adjacent scene elements is carried out through a calibration frame;
a calibration conversion unit, configured to carry out scene element calibration through the calibration frame at each moment;
and a comparison unit, configured to determine in real time the distances of adjacent scene elements in the global three-dimensional environment scene.
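One plausible reading of claim 5 is that each calibration frame from a key image frame is lifted to a 3D position and used to mark the matching lidar element in the global scene. The sketch below assumes a pinhole camera model with illustrative intrinsics (fx, fy, cx, cy) and an externally supplied depth estimate; CalibrationFrame2D, lift_to_local_scene, and mark_in_global_scene are hypothetical names.

```python
from dataclasses import dataclass

@dataclass
class CalibrationFrame2D:
    """One calibration frame in a key image frame: element class plus pixel box."""
    label: str
    u_min: float
    v_min: float
    u_max: float
    v_max: float

def lift_to_local_scene(det, depth_m, fx=1000.0, fy=1000.0, cx=960.0, cy=540.0):
    """Back-project the box center to a camera-frame 3D point with a pinhole model and a depth estimate."""
    u = (det.u_min + det.u_max) / 2.0
    v = (det.v_min + det.v_max) / 2.0
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return {"label": det.label, "xyz": (x, y, depth_m)}

def mark_in_global_scene(global_elements, local_elements, merge_radius_m=1.5):
    """Mark a lidar element as camera-confirmed when a same-class camera element lies within the merge radius."""
    for g in global_elements:
        for l in local_elements:
            gap = sum((a - b) ** 2 for a, b in zip(g["xyz"], l["xyz"])) ** 0.5
            if g["label"] == l["label"] and gap <= merge_radius_m:
                g["camera_confirmed"] = True
    return global_elements

local = [lift_to_local_scene(CalibrationFrame2D("vehicle", 900, 500, 1020, 580), depth_m=12.0)]
scene = mark_in_global_scene([{"label": "vehicle", "xyz": (0.0, 0.0, 12.0)}], local)
```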
6. The on-vehicle chip for an intelligent automobile according to claim 5, wherein the second scene measurement processor further comprises:
a weight presetting unit, configured to preset the three-dimensional weights of different scene elements;
a weight identification unit, configured to obtain the three-dimensional features of the scene elements corresponding to the video data and to determine a three-dimensional weight based on the three-dimensional features;
a preprocessing unit, configured to preprocess the three-dimensional features and the three-dimensional weights respectively to obtain a feature value matrix and a weight value matrix; wherein
the preprocessing comprises: mapping each scene element in the global three-dimensional environment scene through its three-dimensional features and three-dimensional weights, and constructing the feature value matrix and the weight value matrix from the mapped values;
and a scene element determination unit, configured to input the feature value matrix and the weight value matrix into a plurality of three-dimensional systolic arrays for parallel computation, thereby determining the importance of the different scene elements; wherein
scene elements of different importance are calibrated with different calibration frames.
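The computation described in claim 6 is essentially a feature-matrix by weight-matrix product whose rows collapse into per-element importance scores; the claimed systolic arrays would evaluate that product in parallel. The sketch below uses a plain Python matrix product as a stand-in, plus an assumed three-tier mapping from score to calibration-frame style.

```python
def element_importance(feature_matrix, weight_matrix):
    """Plain matrix product standing in for the claimed parallel (systolic-array) computation:
    row i of feature_matrix holds element i's mapped features, and
    weight_matrix holds the matching preset weights."""
    n_rows = len(feature_matrix)
    n_cols = len(weight_matrix[0])
    n_inner = len(weight_matrix)
    scores = [[sum(feature_matrix[i][m] * weight_matrix[m][j] for m in range(n_inner))
               for j in range(n_cols)]
              for i in range(n_rows)]
    return [sum(row) for row in scores]        # collapse each row to one importance score

def calibration_style(score, thresholds=(0.3, 0.7)):
    """Map an importance score to a calibration-frame style (assumed three-tier scheme)."""
    low, high = thresholds
    return "thin" if score < low else "normal" if score < high else "bold"

features = [[0.9, 0.8], [0.1, 0.2]]   # e.g. a pedestrian vs. a roadside building
weights = [[0.6], [0.4]]
print([calibration_style(s) for s in element_importance(features, weights)])   # ['bold', 'thin']
```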
7. The on-vehicle chip for an intelligent automobile according to claim 1, wherein the second scene measurement processor further comprises:
an information output unit, configured to acquire the video data of the user vehicle and to output target information of the recognized adjacent vehicles on a display screen; wherein
the target information includes: vehicle position information, vehicle number information, vehicle type information, and vehicle distance information;
a rendering unit, configured to render the adjacent vehicles based on the recognition result and to assign identification colors to the different types of vehicles; wherein
scene elements of the same type receive the same identification color;
and a video output unit, configured to render the video data according to the identification colors and to output the rendered video data in the user vehicle.
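Claim 7's color rule (same vehicle type, same identification color) can be sketched as a fixed lookup table applied to every recognized adjacent vehicle. The table values below are arbitrary assumptions, as are the names IDENTIFICATION_COLORS and color_detections.

```python
# Fixed per-type identification colors (RGB); the values are arbitrary assumptions.
IDENTIFICATION_COLORS = {
    "car":   (0, 200, 0),
    "truck": (220, 120, 0),
    "bus":   (0, 120, 220),
}
FALLBACK_COLOR = (180, 180, 180)

def color_detections(detections):
    """Attach the identification color to every recognized adjacent vehicle;
    vehicles of the same type always receive the same color."""
    return [{**d, "color": IDENTIFICATION_COLORS.get(d["type"], FALLBACK_COLOR)} for d in detections]

rendered = color_detections([
    {"type": "car",   "bbox": (100, 200, 180, 260), "distance_m": 14.2},
    {"type": "truck", "bbox": (420, 180, 560, 300), "distance_m": 22.7},
])
```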
8. The on-vehicle chip for an intelligent automobile according to claim 1, wherein the main controller comprises:
a map drawing module, configured to establish a real-time driving map of the user vehicle according to the first driving information and the second driving information;
and a travel control unit, configured to determine, from the real-time driving map, the driving speed and driving direction of the vehicles in front of and behind the user vehicle, and to control the user vehicle to drive automatically according to those speeds and directions.
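A very reduced reading of the travel control unit is a follow-the-leader speed rule evaluated against the real-time driving map: track the front vehicle's speed, back off when the gap closes, and rate-limit the change per control tick. The limits and tick length in the sketch below are illustrative assumptions, and plan_speed is a hypothetical name.

```python
def plan_speed(ego_speed_mps, front_speed_mps, front_gap_m,
               min_gap_m=10.0, max_accel=1.5, max_decel=3.0, dt=0.1):
    """Follow-the-leader rule: track the front vehicle's speed, slow down when the
    gap shrinks below the minimum following distance, and rate-limit the change."""
    target = front_speed_mps if front_gap_m > min_gap_m else max(0.0, front_speed_mps - 2.0)
    delta = target - ego_speed_mps
    delta = max(-max_decel * dt, min(max_accel * dt, delta))
    return ego_speed_mps + delta

speed = 15.0
for _ in range(5):                      # five 100 ms control ticks
    speed = plan_speed(speed, front_speed_mps=12.0, front_gap_m=18.0)
```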
9. The on-vehicle chip for an intelligent automobile according to claim 8, wherein the main controller further comprises:
a position determination unit, configured to determine the real-time vehicle position of the autonomous vehicle according to the driving map;
and an area detection unit, configured to detect, according to the real-time vehicle position, whether the autonomous vehicle has driven into a detection area; wherein
the detection areas include: a vehicle travel lane, a deceleration area, a traffic light area, a pedestrian crossing area, a barrier gate area, a no-stopping area, and a steering detection area;
if the autonomous vehicle is detected to have driven into a detection area, it is judged whether the autonomous vehicle makes a driving error in that detection area; and
if the autonomous vehicle makes a driving error in the detection area, error information of the driving error is recorded.
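A simplified version of claim 9's area check is: look up which detection area (if any) contains the real-time vehicle position, apply a rule check inside it, and append any violation to an error log. The sketch below uses rectangular areas and a single speed rule purely for illustration; DETECTION_AREAS, current_area, and check_and_log are assumed names.

```python
from datetime import datetime, timezone

DETECTION_AREAS = [
    {"name": "traffic_light_area", "x_range": (100.0, 120.0), "y_range": (40.0, 60.0)},
    {"name": "deceleration_area",  "x_range": (300.0, 340.0), "y_range": (10.0, 30.0)},
]

def current_area(x, y):
    """Return the detection area the vehicle position falls inside, if any."""
    for area in DETECTION_AREAS:
        (x0, x1), (y0, y1) = area["x_range"], area["y_range"]
        if x0 <= x <= x1 and y0 <= y <= y1:
            return area["name"]
    return None

def check_and_log(x, y, speed_mps, error_log, speed_limit=8.0):
    """If the vehicle is inside a detection area, run a (toy) rule check and record any violation."""
    area = current_area(x, y)
    if area and speed_mps > speed_limit:
        error_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "area": area,
            "error": f"speed {speed_mps:.1f} m/s over limit {speed_limit} m/s",
        })
    return error_log
```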
10. The on-vehicle chip for an intelligent automobile according to claim 1, wherein the main controller further comprises:
a determination unit, configured to identify the vehicles in the laser radar detection area according to the global three-dimensional environment scene;
and a traffic condition reminding unit, configured to determine the density of adjacent vehicles from the number of vehicles and to determine the traffic congestion state from that density.
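Claim 10 reduces to a density threshold: count the vehicles inside the lidar detection area, divide by the area, and map the density onto a congestion state. The thresholds and the function name (congestion_state) in the sketch below are illustrative assumptions.

```python
def congestion_state(vehicle_count, detection_area_m2, thresholds=(0.002, 0.006)):
    """Classify traffic from adjacent-vehicle density (vehicles per square meter of
    the lidar detection area); the threshold values are illustrative assumptions."""
    density = vehicle_count / detection_area_m2
    light, heavy = thresholds
    if density < light:
        return "free-flowing"
    if density < heavy:
        return "slow"
    return "congested"

# 9 vehicles inside a roughly 60 m x 30 m detection area -> 0.005 veh/m^2 -> "slow"
print(congestion_state(9, 60 * 30))
```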
CN202210151855.9A 2022-02-18 2022-02-18 A on-vehicle chip for intelligent automobile Pending CN114549276A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210151855.9A CN114549276A (en) 2022-02-18 2022-02-18 A on-vehicle chip for intelligent automobile

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210151855.9A CN114549276A (en) 2022-02-18 2022-02-18 A on-vehicle chip for intelligent automobile

Publications (1)

Publication Number Publication Date
CN114549276A true CN114549276A (en) 2022-05-27

Family

ID=81675067

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210151855.9A Pending CN114549276A (en) 2022-02-18 2022-02-18 A on-vehicle chip for intelligent automobile

Country Status (1)

Country Link
CN (1) CN114549276A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination