CN112101209B - Method and apparatus for determining world coordinate point cloud for roadside computing device - Google Patents


Info

Publication number
CN112101209B
CN112101209B (application CN202010966254.4A)
Authority
CN
China
Prior art keywords
point cloud
world coordinate
coordinates
target
coordinate point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010966254.4A
Other languages
Chinese (zh)
Other versions
CN112101209A (en)
Inventor
贾金让
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Apollo Zhilian Beijing Technology Co Ltd
Original Assignee
Apollo Zhilian Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Apollo Zhilian Beijing Technology Co Ltd
Priority to CN202010966254.4A
Publication of CN112101209A
Application granted
Publication of CN112101209B


Classifications

    • G06V 20/54 — Surveillance or monitoring of activities, e.g. for recognising suspicious objects, of traffic, e.g. cars on the road, trains or boats
    • G06V 20/56 — Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06T 17/20 — Finite element generation, e.g. wire-frame surface description, tessellation
    • G06T 3/604 — Rotation of a whole image or part thereof using a CORDIC [COordinate Rotation DIgital Computer] device
    • G06T 7/70 — Determining position or orientation of objects or cameras
    • G06T 2207/10028 — Range image; Depth image; 3D point clouds

Abstract

The application discloses a method and an apparatus for determining a world coordinate point cloud, relating to the technical fields of image processing, intelligent transportation, and autonomous driving. A specific embodiment comprises the following steps: acquiring coordinates of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera; searching, in a mapping relation between coordinates in images captured by the target roadside camera and world coordinate point clouds, for the world coordinate point cloud corresponding to the coordinates of the target pixel point; and determining the world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud. By exploiting the mapping relation between two-dimensional image coordinates and world coordinate point clouds, the method can find the world coordinate point cloud of the obstacle quickly and accurately while effectively saving computing resources. It also avoids the high cost and the computation errors caused by uneven ground that affect the prior art.

Description

Method and apparatus for determining world coordinate point cloud for roadside computing device
Technical Field
The application relates to the field of computer technology, in particular to the technical fields of image processing, intelligent transportation, and autonomous driving, and especially to a method and an apparatus for determining a world coordinate point cloud.
Background
In an image from a monocular camera, the position of an obstacle is represented by two-dimensional pixel coordinates, whereas the obstacle in actual three-dimensional space has three-dimensional coordinates. A given pixel point in the image therefore corresponds to a ray in the three-dimensional world coordinate system: every point on that ray in the world coordinate system projects onto the same pixel of the image.
In the related art, a binocular camera is generally used to determine the world coordinates corresponding to pixel points in an image, which places a high demand on device cost. Alternatively, the related art computes, in real time, the intersection of a pixel's ray with the ground plane to obtain world coordinates, which places a high demand on the flatness of the ground.
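The related-art ray-intersection approach can be sketched as follows. This is a minimal illustration under assumed values: the intrinsic matrix, camera pose, and flat ground plane z = 0 are illustrative, not taken from the patent.

```python
import numpy as np

def pixel_to_world_on_ground(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) to a ray in world coordinates and
    intersect it with the horizontal plane z = ground_z.

    K: 3x3 camera intrinsics; R, t: world-to-camera extrinsics,
    i.e. p_cam = R @ p_world + t.
    """
    # Ray direction in camera coordinates (pinhole model).
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Camera center and ray direction expressed in world coordinates.
    c_world = -R.T @ t
    d_world = R.T @ d_cam
    # Solve c_world.z + s * d_world.z = ground_z for the scale s.
    s = (ground_z - c_world[2]) / d_world[2]
    return c_world + s * d_world

# Illustrative setup: camera center at world (0, 0, -5), optical axis
# along world +z, so the ground plane z = 0 lies 5 m in front of it.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
p = pixel_to_world_on_ground(740, 360, K, R, t)  # -> [0.5, 0.0, 0.0]
```

As the patent notes, the accuracy of this scheme degrades when the real ground deviates from the assumed plane.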
Disclosure of Invention
Provided are a method, an apparatus, an electronic device, and a storage medium for determining a world coordinate point cloud for a roadside computing device.
According to a first aspect, there is provided a method of determining a world coordinate point cloud, comprising: acquiring coordinates of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera; searching, in a mapping relation between coordinates in images captured by the target roadside camera and world coordinate point clouds, for the world coordinate point cloud corresponding to the coordinates of the target pixel point; and determining the world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud.
According to a second aspect, there is provided an apparatus for determining a world coordinate point cloud, comprising: an acquisition unit configured to acquire coordinates of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera; a searching unit configured to search, in a mapping relation between coordinates in images captured by the target roadside camera and world coordinate point clouds, for the world coordinate point cloud corresponding to the coordinates of the target pixel point; and a determining unit configured to determine the world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud.
According to a third aspect, there is provided a method for a roadside computing device to determine a world coordinate point cloud, the method comprising: acquiring an image to be processed captured by a target roadside camera; acquiring coordinates of a target pixel point of an obstacle in the image to be processed; searching, in a mapping relation between coordinates in images captured by the target roadside camera and world coordinate point clouds, for the world coordinate point cloud corresponding to the coordinates of the target pixel point; determining the world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud; and sending the determined world coordinate point cloud to a cloud control platform or a server.
According to a fourth aspect, there is provided an electronic device comprising: one or more processors; and a storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement a method as in any of the embodiments of the method of determining a world coordinate point cloud.
According to a fifth aspect, a computer readable storage medium is provided, on which a computer program is stored which, when executed by a processor, implements a method as any of the embodiments of the method of determining a world coordinate point cloud.
According to this scheme, the world coordinate point cloud of the obstacle can be determined quickly and accurately by exploiting the mapping relation between two-dimensional image coordinates and world coordinate point clouds, effectively saving computing resources. At the same time, the high cost and the computation errors caused by uneven ground in the prior art are avoided.
Drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings, in which:
FIG. 1 is an exemplary system architecture diagram in which some embodiments of the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method of determining a world coordinate point cloud according to the present application;
FIG. 3 is a schematic illustration of one application scenario of a method of determining a world coordinate point cloud according to the present application;
FIG. 4 is a flow chart of one embodiment of generating a mapping relationship according to the present application;
FIG. 5 is a schematic structural view of one embodiment of an apparatus for determining a world coordinate point cloud according to the present application;
fig. 6 is a block diagram of an electronic device for implementing a method of determining a world coordinate point cloud according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present application to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
It should be noted that, in the case of no conflict, the embodiments and features in the embodiments may be combined with each other. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of a method of determining a world coordinate point cloud or an apparatus of determining a world coordinate point cloud of the present application may be applied.
As shown in fig. 1, the system architecture 100 may include an onboard system (i.e., onboard brain) 101, a server (or cloud control platform) 102, a roadside camera 103, a roadside computing device 104, and a network 105. The network 105 serves as a medium providing communication links among the in-vehicle system 101, the server 102, the roadside computing device 104, and the roadside camera 103. The network 105 may include various connection types, such as wired links, wireless communication links, or fiber optic cables.
A user may interact with server 102 over network 105 using on-board system 101 to receive or send messages, etc. Various communication client applications may be installed on the in-vehicle system 101, such as navigation-type applications, live applications, instant messaging tools, mailbox clients, social platform software, and the like.
The in-vehicle system 101 may be hardware or software. When the in-vehicle system 101 is hardware, it may be any of various electronic devices with a display screen, including but not limited to smartphones, tablets, e-book readers, laptop computers, and desktop computers. When the in-vehicle system 101 is software, it can be installed in the electronic devices listed above. It may be implemented as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module. No specific limitation is imposed here.
Server 102 may be a server that provides various services, such as a background server that provides support for the in-vehicle system 101, the roadside camera 103, and/or the roadside computing device 104. The background server may analyze and otherwise process the received data, such as the coordinates of the target pixel point of an obstacle, and feed back the processing result (for example, the world coordinate point cloud of the obstacle) to the terminal device.
The roadside computing device 104 may be connected to the roadside camera 103 and acquire images captured by the roadside camera 103.
It should be noted that, the method for determining the world coordinate point cloud provided in the embodiments of the present application may be performed by various roadside devices (such as the roadside camera 103 or the roadside computing device 104), the server (or the cloud control platform) 102 or the vehicle-mounted system 101, and accordingly, the apparatus for determining the world coordinate point cloud may be disposed in the various roadside devices, the server 102 or the vehicle-mounted system 101.
It should be understood that the number of in-vehicle systems, roadside cameras, roadside computing devices, networks, and servers in fig. 1 are merely illustrative. There may be any number of in-vehicle systems, roadside cameras, roadside computing devices, networks, and servers, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a method of determining a world coordinate point cloud according to the present application is shown. The method for determining the world coordinate point cloud comprises the following steps:
step 201, obtaining coordinates of a target pixel point of an obstacle in an image to be processed, which is shot by a target roadside camera.
In this embodiment, the execution body of the method for determining a world coordinate point cloud (for example, the roadside computing device, roadside camera, on-board system, server, or cloud control platform shown in fig. 1) may acquire the coordinates of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera. The coordinates may be those of a chosen target position on the obstacle, for example the pixel point at the obstacle's center point.
The target roadside camera is fixed to public infrastructure such as a utility pole, so the pose of the camera is fixed. Consequently, the world coordinate point cloud corresponding to the coordinates of each pixel point in images captured by the roadside camera is also fixed and can be determined in advance.
Step 202, searching for a world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera.
In this embodiment, the execution body may search, in a predetermined mapping relationship, a world coordinate point cloud corresponding to the coordinates of the target pixel point. Specifically, the mapping relationship is used for representing the corresponding relationship between coordinates of pixel points in the image shot by the target roadside camera and world coordinate point clouds in a world coordinate system. Thus, by using the mapping relationship, the execution subject can directly find the world coordinate point cloud corresponding to the coordinates of the target pixel point.
Step 203, determining the world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud.
In this embodiment, the execution body may determine the world coordinate point cloud of the whole obstacle in the world coordinate system based on the found world coordinate point cloud. Specifically, the dimensions of the obstacle, i.e. its width and height, may be fixed. If the world coordinate point cloud corresponding to the coordinates of the obstacle's target pixel point (for example, the point cloud of the obstacle's center point) has been determined, and the dimensions of the obstacle in the two-dimensional image are also known, then the world coordinate point cloud covering the obstacle's extent in the world coordinate system can be determined from the point cloud corresponding to the target pixel point and the position of that pixel point within the obstacle; that is, the world coordinate point cloud corresponding to the obstacle is obtained.
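Steps 201 to 203 amount to a constant-time table lookup followed by an extent-based gather. The sketch below illustrates this under assumed names and shapes; the mapping array layout and the `obstacle_world_points` helper are illustrative, not prescribed by the patent.

```python
import numpy as np

# Hypothetical precomputed mapping: for each pixel (row, col) of the
# camera image, the corresponding world coordinate (x, y, z) in
# centimeters, stored as a three-channel short-integer array.
H, W = 4, 4
mapping = np.zeros((H, W, 3), dtype=np.int16)
mapping[2, 3] = (120, -40, 0)   # pixel (row=2, col=3) -> world point

def lookup_world_point(mapping, u, v):
    """Step 202: direct lookup of the world point for pixel (u, v)."""
    return mapping[v, u]         # u = column, v = row

def obstacle_world_points(mapping, u, v, half_w=1, half_h=1):
    """Step 203 sketch: gather the world points covered by the
    obstacle's pixel extent around its target pixel point."""
    rows = slice(max(v - half_h, 0), v + half_h + 1)
    cols = slice(max(u - half_w, 0), u + half_w + 1)
    return mapping[rows, cols].reshape(-1, 3)

center = lookup_world_point(mapping, 3, 2)   # -> [120, -40, 0]
```

The lookup replaces any per-frame geometric computation, which is where the savings in computing resources come from.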
The method provided by this embodiment of the application can determine the world coordinate point cloud of an obstacle quickly and accurately by exploiting the mapping relation between two-dimensional image coordinates and world coordinate point clouds, effectively saving computing resources. At the same time, the high cost and the computation errors caused by uneven ground in the prior art are avoided.
With continued reference to fig. 3, fig. 3 is a schematic diagram of an application scenario of the method for determining a world coordinate point cloud according to the present embodiment. In the application scenario of fig. 3, the execution subject 301 acquires coordinates 302 of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera. The execution subject 301 searches for a world coordinate point cloud 304 corresponding to the coordinates of the target pixel point in a mapping relationship 303 between coordinates and the world coordinate point cloud in an image captured by the target roadside camera. The execution body 301 determines a world coordinate point cloud 305 of the obstacle in the world coordinate system based on the found world coordinate point cloud 304.
With further reference to FIG. 4, a flow 400 of one embodiment of determining a mapping relationship is illustrated. The process 400 includes the steps of:
In step 401, a point cloud scanning result of the geographic area indicated by the target image is acquired, where the point cloud scanning result is a dense point cloud obtained by vehicle scanning, and the target image is captured of that geographic area by the target roadside camera.
In this embodiment, the execution body of the method for determining a world coordinate point cloud (for example, the roadside computing device, roadside camera, on-board system, server, or cloud control platform shown in fig. 1) or another electronic device may acquire a point cloud scanning result for the geographic area. The point cloud scanning result may be a point cloud obtained by a vehicle scanning the geographic area, and the point cloud may be dense. The target image is an image captured by the target roadside camera and includes image content corresponding to the geographic area; that is, the geographic area is the subject photographed by the target roadside camera.
The vehicle herein may be various vehicles capable of point cloud scanning such as an autonomous vehicle.
Step 402, extracting a ground point cloud from the point cloud scanning result, and converting the ground point cloud into a world coordinate system.
In this embodiment, the execution body may extract a ground point cloud from the point cloud scanning result, and convert the ground point cloud into a world coordinate system. Specifically, the execution body may extract the ground point cloud in the point cloud scanning result in various manners, such as a ground fitting algorithm.
In practice, the execution subject may convert the ground point cloud into the world coordinate system by using a transformation matrix from the vehicle coordinate system to the world coordinate system, that is, external parameters of the sensors of the vehicle, to obtain the ground point cloud in the world coordinate system.
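The vehicle-to-world conversion described above is a single rigid transform. The sketch below assumes the extrinsics are given as a 4x4 homogeneous matrix; the specific rotation and translation values are illustrative.

```python
import numpy as np

def vehicle_to_world(points, T_world_vehicle):
    """Apply the vehicle-to-world extrinsic (a 4x4 homogeneous
    transform) to an (N, 3) ground point cloud."""
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])   # (N, 4)
    return (T_world_vehicle @ homo.T).T[:, :3]

# Illustrative extrinsic: a 90-degree yaw plus a translation.
T = np.array([[0.0, -1.0, 0.0, 10.0],
              [1.0,  0.0, 0.0, 20.0],
              [0.0,  0.0, 1.0,  0.5],
              [0.0,  0.0, 0.0,  1.0]])
pts = np.array([[1.0, 0.0, 0.0]])
world = vehicle_to_world(pts, T)   # -> [[10.0, 21.0, 0.5]]
```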
Step 403, projecting the ground point cloud under the world coordinate system to the target image, and generating a mapping relation based on the projection result.
In this embodiment, the execution subject may project the converted ground point cloud in the world coordinate system back to the target image, and generate the mapping relationship based on the projection result of the present projection. In practice, the execution subject may generate the mapping relationship based on the projection result in various ways. For example, the execution subject may directly use the projection result as the mapping relationship.
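Using the projection result directly as the mapping relation can be sketched as follows, assuming a pinhole camera with known intrinsics and world-to-camera extrinsics. All numeric values are illustrative.

```python
import numpy as np

def project_to_image(points_world, K, R, t, shape):
    """Project world-frame ground points into the target image and
    record, per hit pixel, the corresponding world point(s).
    R, t: world-to-camera extrinsics (p_cam = R @ p_world + t)."""
    H, W = shape
    mapping = {}
    for p in points_world:
        p_cam = R @ p + t
        if p_cam[2] <= 0:          # point behind the camera
            continue
        uvw = K @ p_cam
        u = int(round(uvw[0] / uvw[2]))
        v = int(round(uvw[1] / uvw[2]))
        if 0 <= u < W and 0 <= v < H:
            mapping.setdefault((v, u), []).append(p)
    return mapping

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])   # camera 5 m from z = 0
initial = project_to_image(np.array([[0.0, 0.0, 0.0]]), K, R, t, (48, 64))
```

A pixel may receive zero, one, or several world points; the optional implementations below deal with both the zero case (interpolation) and the several case (centroid).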
According to the embodiment, the ground point cloud can be determined from the point cloud scanning result of the vehicle, an accurate mapping relation is generated, and further the world coordinate point cloud corresponding to the center of the bottom surface of the obstacle can be accurately determined.
In some alternative implementations of the present embodiment, step 403 may include: projecting a ground point cloud under a world coordinate system to a target image by utilizing external parameters of a target roadside camera to generate an initial mapping, wherein the initial mapping comprises mapping of coordinates of at least two pixel points of the target image and the world coordinate point cloud; in response to the fact that pixel points with no corresponding world coordinate point cloud exist in the initial mapping in the target image, bilinear interpolation processing is conducted on the world coordinate point cloud corresponding to coordinates of at least two pixel points, and a mapping relation is obtained, wherein the mapping relation comprises mapping of all the pixel points of the target image and the world coordinate point cloud, and the data type of the world coordinate point cloud in the mapping relation is a three-channel short integer.
In these alternative implementations, the execution subject may project the ground point cloud in the world coordinate system onto the target image using the extrinsic parameters of the target roadside camera, generating an initial mapping, that is, an initial mapping relation. The initial mapping may cover only a portion of the target image's pixels, not all of them; that is, some pixels in the target image have no corresponding world coordinate point cloud in the initial mapping. The execution subject therefore performs bilinear interpolation on the world coordinate point clouds corresponding to the coordinates of the known pixels, so that the resulting mapping relation associates the coordinates of every pixel in the target image with a world coordinate point cloud.
Specifically, after the interpolation process, the execution subject may project the interpolated world coordinate point cloud into the target image, thereby obtaining a final mapping relationship.
In practice, the data type of the world coordinate point cloud in the mapping relation may be a short integer, that is, a short type, with the three-dimensional world coordinates carried in three channels and the units in centimeters. The storage format of one set of mappings may then be width × height × 3.
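The interpolation and storage described above can be sketched as follows. For clarity the example interpolates one missing pixel from four known pixels at the corners of an enclosing cell; the cell layout, values, and array sizes are illustrative assumptions.

```python
import numpy as np

def bilinear_fill(known, u, v):
    """Bilinearly interpolate the world point of pixel (u, v) from
    four known pixels at the corners of an enclosing cell.
    known: dict {(row, col): (x, y, z) in centimeters}."""
    rows = sorted({r for r, _ in known})
    cols = sorted({c for _, c in known})
    r0, r1 = rows[0], rows[-1]
    c0, c1 = cols[0], cols[-1]
    fr = (v - r0) / (r1 - r0)          # vertical blend factor
    fc = (u - c0) / (c1 - c0)          # horizontal blend factor
    p00, p01 = np.array(known[(r0, c0)]), np.array(known[(r0, c1)])
    p10, p11 = np.array(known[(r1, c0)]), np.array(known[(r1, c1)])
    top = p00 * (1 - fc) + p01 * fc
    bot = p10 * (1 - fc) + p11 * fc
    return top * (1 - fr) + bot * fr

# Known corners of a 2x2-pixel cell, world coordinates in centimeters.
corners = {(0, 0): (0, 0, 0), (0, 2): (200, 0, 0),
           (2, 0): (0, 200, 0), (2, 2): (200, 200, 0)}
center = bilinear_fill(corners, 1, 1)           # cell midpoint

# Store in the three-channel short-integer format (height x width x 3).
mapping = np.zeros((3, 3, 3), dtype=np.int16)
mapping[1, 1] = np.round(center).astype(np.int16)
```

Storing centimeters as int16 bounds each coordinate to roughly ±327 m of the origin, which is one plausible reason for the compact format.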
The implementation modes can supplement the world coordinate point clouds corresponding to all pixel points in the target image by utilizing bilinear interpolation, so that the mapping relation of the target image is supplemented.
In some optional application scenarios of these implementations, before bilinear interpolation processing is performed on the world coordinate point cloud corresponding to the coordinates of at least two pixel points in these implementations, the step of generating the mapping relationship may further include: and for the coordinates of any pixel point in the initial mapping, responding to the corresponding relation between the coordinates of the pixel point and at least two world coordinate point clouds in the initial mapping, and taking the centroid of at least two world coordinate point clouds as the world coordinate point cloud corresponding to the coordinates of the pixel point.
In these alternative application scenarios, a pixel point may correspond to multiple world coordinate point clouds in the initial mapping. The execution subject may compute the centroid of those world coordinate point clouds and directly use the centroid as the world coordinate point cloud corresponding to the coordinates of that pixel point.
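Collapsing multiple candidates to their centroid is a simple average; the candidate values below are illustrative.

```python
import numpy as np

def collapse_to_centroid(candidates):
    """When one pixel maps to several world points in the initial
    mapping, keep their centroid as the single world point."""
    return np.mean(np.asarray(candidates, dtype=float), axis=0)

# Hypothetical pixel hit by three projected ground points (cm).
pts = [(100, 200, 0), (102, 198, 0), (104, 202, 3)]
centroid = collapse_to_centroid(pts)   # -> [102., 200., 1.]
```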
These application scenarios handle the case where one pixel point corresponds to multiple world coordinate point clouds, ensuring that the mapping relation stays one-to-one where a one-to-many relation would otherwise arise. Moreover, by using the centroid, the world coordinate point cloud corresponding to the coordinates of the pixel point can be located accurately.
Optionally, the coordinates of the target pixel point of the obstacle are coordinates of a target center point of the obstacle, and the target center point is a bottom surface center point of the obstacle.
In particular, the target center point of the obstacle may be the geometric center of the whole obstacle; further, it may be the center point of the obstacle's bottom surface.
These optional application scenarios use the target center point as the obstacle's target pixel point, which allows the obstacle's position in the world coordinate system to be determined more accurately. In addition, using the bottom-surface center point avoids measurement error arising from the obstacle's three-dimensional extent and improves the accuracy of the determined world coordinate point cloud.
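If the obstacle is detected as a 2D bounding box, the bottom-surface center point corresponds to the center of the box's bottom edge. The sketch below assumes a (x_min, y_min, x_max, y_max) box with the image y axis pointing down; the coordinates are illustrative.

```python
def bottom_center(bbox):
    """Target pixel point for an obstacle: the center of the bottom
    edge of its 2D bounding box (x_min, y_min, x_max, y_max), with
    the image y axis pointing down so y_max is the bottom edge."""
    x_min, y_min, x_max, y_max = bbox
    return ((x_min + x_max) // 2, y_max)

u, v = bottom_center((100, 50, 140, 130))   # -> (120, 130)
```

This is the pixel most likely to lie on the ground plane, which is why it pairs naturally with a ground-point-cloud mapping.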
With further reference to fig. 5, as an implementation of the method shown in the preceding figures, the present application provides an embodiment of an apparatus for determining a world coordinate point cloud. This apparatus embodiment corresponds to the method embodiment shown in fig. 2 and, beyond the features described below, may include the same or corresponding features and effects as that method embodiment. The apparatus can be applied to various electronic devices.
As shown in fig. 5, the apparatus 500 for determining a world coordinate point cloud according to the present embodiment includes: an acquisition unit 501, a search unit 502, and a determination unit 503. Wherein, the acquiring unit 501 is configured to acquire coordinates of a target pixel point of an obstacle in an image to be processed, which is shot by a target roadside camera; the searching unit 502 is configured to search the world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera; the determining unit 503 is configured to determine a world coordinate point cloud of the obstacle in the world coordinate system based on the found world coordinate point cloud.
In this embodiment, the specific processes of the acquiring unit 501, the searching unit 502 and the determining unit 503 of the apparatus 500 for determining the world coordinate point cloud and the technical effects thereof may refer to the relevant descriptions of the steps 201, 202 and 203 in the corresponding embodiment of fig. 2, and are not repeated herein.
In some optional implementations of the present embodiment, the step of generating the mapping relationship includes: acquiring a point cloud scanning result of a geographic area indicated by a target image, wherein the point cloud scanning result is dense point cloud obtained by vehicle scanning, and the target image is obtained by shooting the geographic area by a target road side camera; extracting a ground point cloud from the point cloud scanning result, and converting the ground point cloud into a world coordinate system; and projecting the ground point cloud under the world coordinate system to the target image, and generating a mapping relation based on a projection result.
In some optional implementations of the present embodiment, projecting the ground point cloud under the world coordinate system onto the target image, generating the mapping relationship based on the projection result includes: projecting a ground point cloud under a world coordinate system to a target image by utilizing external parameters of a target roadside camera to generate an initial mapping, wherein the initial mapping comprises mapping of coordinates of at least two pixel points of the target image and the world coordinate point cloud; in response to the fact that pixel points with no corresponding world coordinate point cloud exist in the initial mapping in the target image, bilinear interpolation processing is conducted on the world coordinate point cloud corresponding to coordinates of at least two pixel points, and a mapping relation is obtained, wherein the mapping relation comprises mapping of all the pixel points of the target image and the world coordinate point cloud, and the data type of the world coordinate point cloud in the mapping relation is a three-channel short integer.
In some optional implementations of this embodiment, before performing bilinear interpolation processing on the world coordinate point cloud corresponding to the coordinates of at least two pixel points, the generating step further includes: and for the coordinates of any pixel point in the initial mapping, responding to the corresponding relation between the coordinates of the pixel point and at least two world coordinate point clouds in the initial mapping, and taking the centroid of at least two world coordinate point clouds as the world coordinate point cloud corresponding to the coordinates of the pixel point.
In some optional implementations of this embodiment, the coordinates of the target pixel point of the obstacle are coordinates of a target center point of the obstacle, which is a bottom surface center point of the obstacle.
The present application also provides a method for determining a world coordinate point cloud for a roadside computing device, the method may include: acquiring an image to be processed, which is shot by a target road side camera; acquiring coordinates of a target pixel point of an obstacle in an image to be processed; searching a world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera; determining a world coordinate point cloud of the obstacle in a world coordinate system based on the searched world coordinate point cloud; and sending the determined world coordinate point cloud to a cloud control platform or a server.
In some optional implementations of this embodiment, the step of generating the mapping relationship includes: acquiring a point cloud scanning result for the geographic area indicated by a target image, where the point cloud scanning result is a dense point cloud obtained by vehicle-mounted scanning and the target image is obtained by the target roadside camera photographing the geographic area; extracting a ground point cloud from the point cloud scanning result and converting the ground point cloud into the world coordinate system; and projecting the ground point cloud in the world coordinate system onto the target image and generating the mapping relationship based on the projection result.
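The extraction and conversion steps can be sketched under simplifying assumptions: here ground points are selected by a crude height threshold (production systems typically use plane fitting such as RANSAC), and a hypothetical 4x4 sensor-to-world pose `T_world` carries them into the world coordinate system; all numeric values are illustrative.

```python
import numpy as np

# Illustrative dense scan: (x, y, z) points from a vehicle-mounted scanner
scan = np.array([[1.0, 2.0, 0.02],    # near z=0: ground
                 [3.0, 1.0, 0.01],    # near z=0: ground
                 [2.0, 2.0, 1.80]])   # well above ground: e.g. a vehicle

# Crude ground extraction by height threshold (RANSAC plane fitting is typical)
ground = scan[np.abs(scan[:, 2]) < 0.1]

# Hypothetical sensor-to-world pose: translation only, for illustration
T_world = np.eye(4)
T_world[:3, 3] = [100.0, 200.0, 0.0]

# Convert the ground point cloud into the world coordinate system
homog = np.hstack([ground, np.ones((len(ground), 1))])
ground_world = (homog @ T_world.T)[:, :3]
```

The resulting `ground_world` points are what the projection step then maps onto the target image's pixels.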
In some optional implementations of this embodiment, projecting the ground point cloud in the world coordinate system onto the target image and generating the mapping relationship based on the projection result includes: projecting the ground point cloud in the world coordinate system onto the target image using the extrinsic parameters of the target roadside camera to generate an initial mapping, where the initial mapping maps the coordinates of at least two pixel points of the target image to world coordinate point clouds; and, in response to determining that the target image contains pixel points that have no corresponding world coordinate point cloud in the initial mapping, performing bilinear interpolation on the world coordinate point clouds corresponding to the coordinates of the at least two pixel points to obtain the mapping relationship, where the mapping relationship maps every pixel point of the target image to a world coordinate point cloud, and the data type of each world coordinate point cloud in the mapping relationship is a three-channel short integer.
In some optional implementations of this embodiment, before the bilinear interpolation is performed on the world coordinate point clouds corresponding to the coordinates of the at least two pixel points, the generating step further includes: for the coordinates of any pixel point in the initial mapping, in response to the coordinates of that pixel point corresponding to at least two world coordinate point clouds in the initial mapping, taking the centroid of those world coordinate point clouds as the world coordinate point cloud corresponding to the coordinates of that pixel point.
In some optional implementations of this embodiment, the coordinates of the target pixel point of the obstacle are the coordinates of a target center point of the obstacle, the target center point being the bottom-surface center point of the obstacle.
According to embodiments of the present application, an electronic device and a readable storage medium are also provided.
Fig. 6 is a block diagram of an electronic device for the method of determining a world coordinate point cloud according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the application described and/or claimed herein.
As shown in fig. 6, the electronic device includes: one or more processors 601, a memory 602, and interfaces for connecting the components, including high-speed interfaces and low-speed interfaces. The components are interconnected by different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions executed within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output device, such as a display device coupled to an interface. In other embodiments, multiple processors and/or multiple buses may be used together with multiple memories, if desired. Likewise, multiple electronic devices may be connected, each providing a portion of the necessary operations (e.g., as a server array, a set of blade servers, or a multiprocessor system). One processor 601 is taken as an example in fig. 6.
Memory 602 is a non-transitory computer-readable storage medium provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of determining a world coordinate point cloud provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of determining a world coordinate point cloud provided by the present application.
The memory 602 is used as a non-transitory computer readable storage medium, and may be used to store a non-transitory software program, a non-transitory computer executable program, and modules, such as program instructions/modules (e.g., the acquisition unit 501, the search unit 502, and the determination unit 503 shown in fig. 5) corresponding to the method for determining a world coordinate point cloud in the embodiments of the present application. The processor 601 executes various functional applications of the server and data processing, i.e., implements the method of determining a world coordinate point cloud in the above-described method embodiments, by running non-transitory software programs, instructions, and modules stored in the memory 602.
The memory 602 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for a function; the storage data area may store data created from the use of the electronic device that determines the world coordinate point cloud, and the like. In addition, the memory 602 may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid-state storage device. In some embodiments, memory 602 may optionally include memory remotely located with respect to processor 601, which may be connected to an electronic device that determines a world coordinate point cloud via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of determining a world coordinate point cloud may further include: an input device 603 and an output device 604. The processor 601, memory 602, input device 603 and output device 604 may be connected by a bus or otherwise, for example in fig. 6.
The input device 603 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device that determine the world coordinate point cloud, such as a touch screen, a keypad, a mouse, a track pad, a touch pad, a pointer stick, one or more mouse buttons, a track ball, a joystick, and the like. The output means 604 may include a display device, auxiliary lighting means (e.g., LEDs), tactile feedback means (e.g., vibration motors), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device may be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor, and which can receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also referred to as programs, software, software applications, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic disks, optical disks, memory, programmable logic devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as machine-readable signals. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), and the internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by software, or may be implemented by hardware. The described units may also be provided in a processor, for example, described as: a processor includes an acquisition unit, a lookup unit, and a determination unit. The names of these units do not constitute limitations on the unit itself in some cases, and for example, the acquisition unit may also be described as "a unit that acquires coordinates of a target pixel point of an obstacle in an image to be processed captured by a target roadside camera".
As another aspect, the present application also provides a computer-readable medium that may be contained in the apparatus described in the above embodiments; or may be present alone without being fitted into the device. The computer readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquiring coordinates of target pixel points of obstacles in an image to be processed, which is shot by a target roadside camera; searching a world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera; and determining the world coordinate point cloud of the obstacle in the world coordinate system based on the searched world coordinate point cloud.
The foregoing description presents only preferred embodiments of the present application and an explanation of the technical principles employed. It will be appreciated by persons skilled in the art that the scope of the invention referred to in this application is not limited to technical solutions formed by the specific combinations of the features described above, and is also intended to cover other technical solutions formed by any combination of those features or their equivalents without departing from the inventive concept, for example, solutions in which the features above are replaced by (but not limited to) features with similar functions disclosed in the present application.

Claims (11)

1. A method of determining a world coordinate point cloud, the method comprising:
acquiring coordinates of target pixel points of obstacles in an image to be processed, which is shot by a target roadside camera;
searching a world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera;
determining a world coordinate point cloud of the obstacle in a world coordinate system based on the searched world coordinate point cloud;
the step of generating the mapping relation comprises the following steps:
acquiring a point cloud scanning result for a geographic area indicated by a target image, wherein the point cloud scanning result is a dense point cloud obtained by vehicle-mounted scanning, and the target image is obtained by the target roadside camera photographing the geographic area;
extracting a ground point cloud from the point cloud scanning result, and converting the ground point cloud into a world coordinate system;
and projecting the ground point cloud under the world coordinate system to the target image, and generating the mapping relation based on a projection result.
2. The method of claim 1, wherein the projecting the ground point cloud in the world coordinate system to the target image, generating the mapping relationship based on a projection result, comprises:
projecting the ground point cloud in the world coordinate system onto the target image by using the extrinsic parameters of the target roadside camera to generate an initial mapping, wherein the initial mapping comprises a mapping of coordinates of at least two pixel points of the target image to the world coordinate point cloud;
and in response to the target image containing pixel points for which no corresponding world coordinate point cloud exists in the initial mapping, performing bilinear interpolation processing on the world coordinate point clouds corresponding to the coordinates of the at least two pixel points to obtain the mapping relation, wherein the mapping relation comprises a mapping of all the pixel points of the target image to the world coordinate point cloud, and the data type of the world coordinate point cloud in the mapping relation is a three-channel short integer.
3. The method of claim 2, wherein, before the bilinear interpolation is performed on the world coordinate point cloud corresponding to the coordinates of the at least two pixels, the generating step further includes:
and for the coordinates of any pixel point in the initial mapping, in response to the coordinates of the pixel point corresponding to at least two world coordinate point clouds in the initial mapping, taking the centroid of the at least two world coordinate point clouds as the world coordinate point cloud corresponding to the coordinates of the pixel point.
4. The method of claim 2, wherein the coordinates of the target pixel point of the obstacle are coordinates of a target center point of the obstacle, the target center point being a bottom surface center point of the obstacle.
5. An apparatus for determining a world coordinate point cloud, the apparatus comprising:
the acquisition unit is configured to acquire coordinates of target pixel points of the obstacle in the image to be processed, which is shot by the target roadside camera;
the searching unit is configured to search the world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera;
a determining unit configured to determine a world coordinate point cloud of the obstacle in a world coordinate system based on the found world coordinate point cloud;
the step of generating the mapping relation comprises the following steps:
acquiring a point cloud scanning result for a geographic area indicated by a target image, wherein the point cloud scanning result is a dense point cloud obtained by vehicle-mounted scanning, and the target image is obtained by the target roadside camera photographing the geographic area;
extracting a ground point cloud from the point cloud scanning result, and converting the ground point cloud into a world coordinate system;
and projecting the ground point cloud under the world coordinate system to the target image, and generating the mapping relation based on a projection result.
6. The apparatus of claim 5, wherein the projecting the ground point cloud in the world coordinate system to the target image, generating the mapping relationship based on a projection result, comprises:
projecting the ground point cloud in the world coordinate system onto the target image by using the extrinsic parameters of the target roadside camera to generate an initial mapping, wherein the initial mapping comprises a mapping of coordinates of at least two pixel points of the target image to the world coordinate point cloud;
and in response to the target image containing pixel points for which no corresponding world coordinate point cloud exists in the initial mapping, performing bilinear interpolation processing on the world coordinate point clouds corresponding to the coordinates of the at least two pixel points to obtain the mapping relation, wherein the mapping relation comprises a mapping of all the pixel points of the target image to the world coordinate point cloud, and the data type of the world coordinate point cloud in the mapping relation is a three-channel short integer.
7. The apparatus of claim 6, wherein the generating step further comprises, prior to the bilinear interpolation of the world coordinate point cloud corresponding to the coordinates of the at least two pixels:
and for the coordinates of any pixel point in the initial mapping, in response to the coordinates of the pixel point corresponding to at least two world coordinate point clouds in the initial mapping, taking the centroid of the at least two world coordinate point clouds as the world coordinate point cloud corresponding to the coordinates of the pixel point.
8. The apparatus of claim 6, wherein the coordinates of the target pixel point of the obstacle are coordinates of a target center point of the obstacle, the target center point being a bottom surface center point of the obstacle.
9. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-4.
10. A computer readable storage medium having stored thereon a computer program, wherein the program when executed by a processor implements the method of any of claims 1-4.
11. A method for a roadside computing device to determine a world coordinate point cloud, the method comprising:
acquiring an image to be processed, which is shot by a target road side camera;
acquiring coordinates of a target pixel point of an obstacle in the image to be processed;
searching a world coordinate point cloud corresponding to the coordinates of the target pixel point in the mapping relation between the coordinates and the world coordinate point cloud in the image shot by the target roadside camera;
determining a world coordinate point cloud of the obstacle in a world coordinate system based on the searched world coordinate point cloud;
transmitting the determined world coordinate point cloud to a cloud control platform or a server;
the step of generating the mapping relation comprises the following steps:
acquiring a point cloud scanning result for a geographic area indicated by a target image, wherein the point cloud scanning result is a dense point cloud obtained by vehicle-mounted scanning, and the target image is obtained by the target roadside camera photographing the geographic area;
extracting a ground point cloud from the point cloud scanning result, and converting the ground point cloud into a world coordinate system;
and projecting the ground point cloud under the world coordinate system to the target image, and generating the mapping relation based on a projection result.
CN202010966254.4A 2020-09-15 2020-09-15 Method and apparatus for determining world coordinate point cloud for roadside computing device Active CN112101209B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010966254.4A CN112101209B (en) 2020-09-15 2020-09-15 Method and apparatus for determining world coordinate point cloud for roadside computing device


Publications (2)

Publication Number Publication Date
CN112101209A CN112101209A (en) 2020-12-18
CN112101209B true CN112101209B (en) 2024-04-09

Family

ID=73758824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010966254.4A Active CN112101209B (en) 2020-09-15 2020-09-15 Method and apparatus for determining world coordinate point cloud for roadside computing device

Country Status (1)

Country Link
CN (1) CN112101209B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112634354B (en) * 2020-12-21 2021-08-13 紫清智行科技(北京)有限公司 Road side sensor-based networking automatic driving risk assessment method and device
CN112668460A (en) 2020-12-25 2021-04-16 北京百度网讯科技有限公司 Target detection method, electronic equipment, road side equipment and cloud control platform
CN112815979B (en) * 2020-12-30 2023-11-21 联想未来通信科技(重庆)有限公司 Sensor calibration method and device
CN114092916B (en) * 2021-11-26 2023-07-18 阿波罗智联(北京)科技有限公司 Image processing method, device, electronic equipment, automatic driving vehicle and medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105913488A (en) * 2016-04-15 2016-08-31 长安大学 Three-dimensional-mapping-table-based three-dimensional point cloud rapid reconstruction method
CN106813568A (en) * 2015-11-27 2017-06-09 阿里巴巴集团控股有限公司 object measuring method and device
WO2018127007A1 (en) * 2017-01-03 2018-07-12 成都通甲优博科技有限责任公司 Depth image acquisition method and system
CN108986161A (en) * 2018-06-19 2018-12-11 亮风台(上海)信息科技有限公司 A kind of three dimensional space coordinate estimation method, device, terminal and storage medium
CN109074668A (en) * 2018-08-02 2018-12-21 深圳前海达闼云端智能科技有限公司 Method for path navigation, relevant apparatus and computer readable storage medium
CN110244302A (en) * 2019-07-05 2019-09-17 苏州科技大学 Ground Synthetic Aperture Radar images cell coordinate three-dimension varying method
CN110738183A (en) * 2019-10-21 2020-01-31 北京百度网讯科技有限公司 Obstacle detection method and device
CN111079680A (en) * 2019-12-23 2020-04-28 北京三快在线科技有限公司 Temporary traffic signal lamp detection method and device and automatic driving equipment
CN111161338A (en) * 2019-12-26 2020-05-15 浙江大学 Point cloud density improving method for depth prediction based on two-dimensional image gray scale
WO2020103427A1 (en) * 2018-11-23 2020-05-28 华为技术有限公司 Object detection method, related device and computer storage medium
CN111578839A (en) * 2020-05-25 2020-08-25 北京百度网讯科技有限公司 Obstacle coordinate processing method and device, electronic equipment and readable storage medium
CN111612852A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for verifying camera parameters
CN111612760A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for detecting obstacles

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106776996B (en) * 2016-12-02 2018-09-07 百度在线网络技术(北京)有限公司 Method and apparatus for the accuracy for testing high-precision map
CN113486797B (en) * 2018-09-07 2023-08-11 百度在线网络技术(北京)有限公司 Unmanned vehicle position detection method, unmanned vehicle position detection device, unmanned vehicle position detection equipment, storage medium and vehicle

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106813568A (en) * 2015-11-27 2017-06-09 阿里巴巴集团控股有限公司 object measuring method and device
CN105913488A (en) * 2016-04-15 2016-08-31 长安大学 Three-dimensional-mapping-table-based three-dimensional point cloud rapid reconstruction method
WO2018127007A1 (en) * 2017-01-03 2018-07-12 成都通甲优博科技有限责任公司 Depth image acquisition method and system
CN108986161A (en) * 2018-06-19 2018-12-11 亮风台(上海)信息科技有限公司 A kind of three dimensional space coordinate estimation method, device, terminal and storage medium
CN109074668A (en) * 2018-08-02 2018-12-21 深圳前海达闼云端智能科技有限公司 Method for path navigation, relevant apparatus and computer readable storage medium
WO2020024234A1 (en) * 2018-08-02 2020-02-06 深圳前海达闼云端智能科技有限公司 Route navigation method, related device, and computer readable storage medium
WO2020103427A1 (en) * 2018-11-23 2020-05-28 华为技术有限公司 Object detection method, related device and computer storage medium
CN110244302A (en) * 2019-07-05 2019-09-17 苏州科技大学 Ground Synthetic Aperture Radar images cell coordinate three-dimension varying method
CN110738183A (en) * 2019-10-21 2020-01-31 北京百度网讯科技有限公司 Obstacle detection method and device
CN111079680A (en) * 2019-12-23 2020-04-28 北京三快在线科技有限公司 Temporary traffic signal lamp detection method and device and automatic driving equipment
CN111161338A (en) * 2019-12-26 2020-05-15 浙江大学 Point cloud density improving method for depth prediction based on two-dimensional image gray scale
CN111612852A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for verifying camera parameters
CN111612760A (en) * 2020-05-20 2020-09-01 北京百度网讯科技有限公司 Method and apparatus for detecting obstacles
CN111578839A (en) * 2020-05-25 2020-08-25 北京百度网讯科技有限公司 Obstacle coordinate processing method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Hyperspectral image super-resolution with spectral-spatial network; Jia, Jinrang et al.; International Journal of Remote Sensing, vol. 39, no. 22, pp. 7806-7829 *
Automatic extrinsic calibration of an RGB-D camera based on ground plane detection in point clouds; Sun Shijie; Song Huansheng; Zhang Chaoyang; Zhang Wentao; Wang Xuan; Journal of Image and Graphics (Issue 06), pp. 92-99 *

Also Published As

Publication number Publication date
CN112101209A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN112101209B (en) Method and apparatus for determining world coordinate point cloud for roadside computing device
EP3869399A2 (en) Vehicle information detection method and apparatus, electronic device, storage medium and program
CN110793544B (en) Method, device and equipment for calibrating parameters of roadside sensing sensor and storage medium
CN111274343A (en) Vehicle positioning method and device, electronic equipment and storage medium
US11713970B2 (en) Positioning method, electronic device and computer readable storage medium
CN111462029B (en) Visual point cloud and high-precision map fusion method and device and electronic equipment
CN111401251B (en) Lane line extraction method, lane line extraction device, electronic equipment and computer readable storage medium
EP3968266B1 (en) Obstacle three-dimensional position acquisition method and apparatus for roadside computing device
KR20210036317A (en) Mobile edge computing based visual positioning method and device
CN111721281B (en) Position identification method and device and electronic equipment
CN111079079B (en) Data correction method, device, electronic equipment and computer readable storage medium
WO2019138597A1 (en) System and method for assigning semantic label to three-dimensional point of point cloud
CN110675635B (en) Method and device for acquiring external parameters of camera, electronic equipment and storage medium
KR102566300B1 (en) Method for indoor localization and electronic device
CN112668428A (en) Vehicle lane change detection method, roadside device, cloud control platform and program product
CN112288825A (en) Camera calibration method and device, electronic equipment, storage medium and road side equipment
CN112184914A (en) Method and device for determining three-dimensional position of target object and road side equipment
CN112344855A (en) Obstacle detection method and device, storage medium and drive test equipment
CN111666876A (en) Method and device for detecting obstacle, electronic equipment and road side equipment
CN111311743B (en) Three-dimensional reconstruction precision testing method and device and electronic equipment
CN113483774A (en) Navigation method, navigation device, electronic equipment and readable storage medium
CN115578433A (en) Image processing method, image processing device, electronic equipment and storage medium
CN111400537B (en) Road element information acquisition method and device and electronic equipment
CN112102417A (en) Method and device for determining world coordinates and external reference calibration method for vehicle-road cooperative roadside camera
CN111949816A (en) Positioning processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20211011

Address after: 100176 101, floor 1, building 1, yard 7, Ruihe West 2nd Road, Beijing Economic and Technological Development Zone, Daxing District, Beijing

Applicant after: Apollo Zhilian (Beijing) Technology Co.,Ltd.

Address before: 2 / F, baidu building, No. 10, Shangdi 10th Street, Haidian District, Beijing 100085

Applicant before: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant