CN110717918B - Pedestrian detection method and device - Google Patents

Pedestrian detection method and device

Info

Publication number
CN110717918B
CN110717918B (application CN201910962342.4A)
Authority
CN
China
Prior art keywords
pedestrian
point cloud
point
stable region
road
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910962342.4A
Other languages
Chinese (zh)
Other versions
CN110717918A
Inventor
高斌
刘祥
张双
朱晓星
薛晶晶
王俊平
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN201910962342.4A
Publication of CN110717918A
Application granted
Publication of CN110717918B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Geometry (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure relates to the field of autonomous driving. Embodiments of the disclosure provide a pedestrian detection method and apparatus. The method includes the following steps: extracting a pedestrian's point cloud from a road point cloud frame; extracting a stable region from the pedestrian's point cloud, the stable region being a region whose form changes less than the pedestrian's other regions while the pedestrian moves; calculating the center point coordinates of the extracted stable region; and determining the pedestrian's motion information based on the center point coordinates of the pedestrian's stable region across multiple consecutive road point cloud frames. The method achieves accurate detection of a pedestrian's motion state.

Description

Pedestrian detection method and device
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, in particular to autonomous driving, and specifically to a pedestrian detection method and apparatus.
Background
In an autonomous driving scenario, a vehicle-mounted lidar is typically used to sense the road environment. The other participants in a road environment are mainly vehicles and pedestrians. A vehicle generally has a stable structure: its form does not change as it moves, and its route usually follows the lane lines. A pedestrian's posture, by contrast, sways while walking, and walking paths are highly random. Improving the accuracy of pedestrian motion-state detection can therefore effectively guarantee the driving safety of autonomous vehicles.
Disclosure of Invention
Embodiments of the present disclosure provide a pedestrian detection method and apparatus, an electronic device, and a computer-readable medium.
In a first aspect, an embodiment of the present disclosure provides a pedestrian detection method, including: extracting pedestrian point cloud from the road point cloud frame; extracting a stable region of the point cloud of the pedestrian, wherein the form change amplitude of the stable region is smaller than that of other regions of the pedestrian when the pedestrian moves; calculating the coordinates of the central point of the extracted stable region; and determining the motion information of the pedestrian based on the coordinates of the central point of the stable region of the pedestrian in a plurality of continuous road point cloud frames.
In some embodiments, the above extracting of the stable region of the pedestrian's point cloud includes: acquiring the vertical coordinate of the highest point in the pedestrian's point cloud; and determining the points within a preset distance range below the highest point as the point cloud of the stable region.
In some embodiments, the above extracting the stable region of the point cloud of the pedestrian includes:
the trunk of the pedestrian is identified based on the pedestrian's point cloud, and the point cloud of the trunk is extracted as the point cloud of the pedestrian's stable region.
In some embodiments, extracting the pedestrian's point cloud from the road point cloud frame includes: in response to detecting that the current road point cloud frame contains point clouds of multiple pedestrians, determining a search range for a target pedestrian in the current road point cloud frame based on the target pedestrian's point cloud in the previous road point cloud frame; and identifying a pedestrian's point cloud within the search range as the target pedestrian's point cloud in the current road point cloud frame.
In some embodiments, the determining the motion information of the pedestrian based on the coordinates of the central point of the stable region of the pedestrian in the plurality of continuous road point cloud frames includes: and calculating the moving speed of the central point as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame.
In a second aspect, an embodiment of the present disclosure provides a pedestrian detection apparatus including: a first extraction unit configured to extract a point cloud of a pedestrian from the road point cloud frame; a second extraction unit configured to extract a stable region from the point cloud of the pedestrian, wherein a morphological change amplitude of the stable region when the pedestrian moves is smaller than morphological change amplitudes of other regions of the pedestrian; a calculation unit configured to calculate center point coordinates of the extracted stable region; a determination unit configured to determine motion information of the pedestrian based on center point coordinates of stable regions of the pedestrian in the plurality of continuous road point cloud frames.
In some embodiments, the second extraction unit is configured to extract the stable region of the pedestrian's point cloud as follows: acquire the vertical coordinate of the highest point in the pedestrian's point cloud; and determine the points within a preset distance range below the highest point as the point cloud of the stable region.
In some embodiments, the second extraction unit is configured to extract the stable region of the pedestrian's point cloud as follows: identify the trunk of the pedestrian based on the pedestrian's point cloud, and extract the trunk's point cloud as the point cloud of the pedestrian's stable region.
In some embodiments, the first extraction unit is configured to extract the pedestrian's point cloud from the road point cloud frame as follows: in response to detecting that the current road point cloud frame contains point clouds of multiple pedestrians, determine a search range for a target pedestrian in the current road point cloud frame based on the target pedestrian's point cloud in the previous road point cloud frame; and identify a pedestrian's point cloud within the search range as the target pedestrian's point cloud in the current road point cloud frame.
In some embodiments, the determination unit is configured to determine the motion information of the pedestrian as follows: and calculating the moving speed of the central point as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the pedestrian detection method provided in the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable medium on which a computer program is stored, wherein the program, when executed by a processor, implements the pedestrian detection method provided in the first aspect.
According to the pedestrian detection method and apparatus, electronic device, and computer-readable medium of the embodiments of the present disclosure, a pedestrian's point cloud is extracted from a road point cloud frame; a stable region, whose form changes less during movement than the pedestrian's other regions, is then extracted from that point cloud; the center point coordinates of the extracted stable region are calculated; and finally the pedestrian's motion information is determined from the center point coordinates of the stable region across multiple consecutive road point cloud frames, achieving accurate detection of the pedestrian's motion state.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which embodiments of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a pedestrian detection method according to the present disclosure;
FIG. 3 is a flow chart of another embodiment of a pedestrian detection method according to the present disclosure;
FIG. 4 is a schematic structural diagram of one embodiment of a pedestrian detection apparatus of the present disclosure;
FIG. 5 is a schematic block diagram of a computer system suitable for use in implementing an electronic device of an embodiment of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and embodiments. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting it. It should also be noted that, for convenience of description, only the portions related to the invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary system architecture 100 to which the pedestrian detection method or pedestrian detection apparatus of the present disclosure may be applied.
As shown in fig. 1, system architecture 100 may include an autonomous vehicle 101, a network 102, and a server 103. Network 102 is used as a medium to provide a communication link between autonomous vehicle 101 and server 103. Network 102 may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
A laser radar 1011 may be disposed on the autonomous vehicle 101, and the laser radar 1011 is configured to collect point cloud data of the environment surrounding the autonomous vehicle. A processing unit 1012 may also be disposed on autonomous vehicle 101, where processing unit 1012 is configured to process data sensed by autonomous vehicle 101, make driving decisions, and so forth.
Autonomous vehicle 101 may interact with server 103 over network 102 to send data to server 103 or receive data from server 103. The server 103 may be a server that provides background support for the autonomous vehicle 101, and may analyze the environmental data sensed by the autonomous vehicle 101 and feed back the analysis results to the autonomous vehicle.
In the application scenario of the present disclosure, the autonomous vehicle 101 may send the point cloud data acquired by the laser radar 1011 to the server 103 through the processing unit 1012. The server 103 may perform obstacle detection and identification on the received point cloud data and return the results to the autonomous vehicle 101, which then makes driving decisions accordingly. Alternatively, the server 103 may itself make the driving decision from the obstacle detection and recognition results and feed a decision instruction back to the autonomous vehicle 101.
The server 103 may be hardware or software. When the server 103 is hardware, it may be implemented as a distributed server cluster composed of a plurality of servers, or may be implemented as a single server. When the server 103 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein.
The pedestrian detection method provided by the embodiment of the present disclosure may be executed by the server 103, and accordingly, a pedestrian detection device may be provided in the server 103.
Alternatively, the pedestrian detection method provided by the embodiment of the present disclosure may also be executed by the processing unit 1012 on the autonomous vehicle 101, and accordingly, the pedestrian detection device may be provided in the processing unit 1012 on the autonomous vehicle 101.
It should be understood that the number of autonomous vehicles, networks, servers, lidar, processing units in fig. 1 is merely illustrative. There may be any number of autonomous vehicles, networks, servers, lidar, processing units, as desired for implementation.
With continued reference to fig. 2, a flow 200 of one embodiment of a pedestrian detection method in accordance with the present disclosure is shown. The pedestrian detection method comprises the following steps:
step 201, extracting a pedestrian point cloud from the road point cloud frame.
In this embodiment, the execution body of the pedestrian detection method may first obtain a road point cloud frame acquired by a vehicle-mounted laser radar. A road point cloud frame is one frame in a point cloud sequence acquired by the radar. Generally, a vehicle-mounted laser radar scans the surrounding environment at a certain frequency, each scanning period forming one point cloud frame. While the autonomous vehicle is driving, the laser radar periodically emits laser pulses and receives the reflections, generating road point cloud frames in sequence.
The execution body may be directly connected to the vehicle-mounted laser radar to acquire road point cloud frames in real time, or it may obtain frames of a stored road point cloud sequence from another device or a storage unit.
Then, the road point cloud frame can be analyzed, and the point cloud of the pedestrian is extracted. Generally, the road environment is complex, and the collected laser point cloud data includes data points of traffic signs such as lane lines, guideboards, speed limit signs, traffic lights and the like, and may also include data points of other traffic participants such as pedestrians, vehicles and the like. In this embodiment, the three-dimensional data points in the road point cloud frame may be clustered and segmented, and then each segmented region is subjected to feature extraction to identify a point cloud representing a pedestrian contour, that is, a point cloud of a pedestrian. Here, the extracted features may be matched according to the contour features of the pedestrian, so as to identify the point cloud of the pedestrian.
Optionally, a method based on deep learning may also be adopted to extract the point cloud of the pedestrian from the road point cloud frame. For example, the trained deep neural network may be used to classify the objects in the road point cloud frame, and identify the point cloud of the pedestrian.
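The clustering-and-segmentation step above can be sketched as a naive Euclidean clustering pass; the function name and the 0.5 m tolerance are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def euclidean_cluster(points, tol=0.5):
    """Naive O(n^2) single-linkage clustering of 3-D points.

    Points closer than `tol` metres to some member of a cluster join
    that cluster, mimicking the segmentation step that precedes
    pedestrian-contour matching. Returns one integer label per point.
    """
    labels = -np.ones(len(points), dtype=int)  # -1 = unassigned
    cluster_id = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = cluster_id
        while stack:
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((dists < tol) & (labels == -1)):
                labels[k] = cluster_id
                stack.append(k)
        cluster_id += 1
    return labels
```

Each resulting cluster would then be matched against pedestrian contour features (or fed to a classifier) to decide whether it represents a pedestrian.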
Step 202, extracting a stable region of the point cloud of the pedestrian.
The form change amplitude of the stable region when the pedestrian moves is smaller than that of the pedestrian's other regions. In this embodiment, a part of the human body whose form changes less than a preset range during walking may be used as the stable region. It can be understood that the posture of the head, shoulders, and main trunk changes within a small range, while the posture of the limbs changes over a large range. The head, shoulders, and main trunk can therefore serve as stable regions, and their point clouds can be extracted.
In some optional implementation manners of this embodiment, a regional form change amplitude analysis may be performed on the point cloud of the same pedestrian extracted from a plurality of continuous road point cloud frames, for example, after the pedestrian point cloud is partitioned into regions, the change amplitude and the change rate of the form of each region between a plurality of continuous frames are calculated, so that a region with a smaller change amplitude is extracted as a stable region.
In some optional implementations of the present embodiment, the stable region may be extracted as follows: acquire the vertical coordinate of the highest point in the pedestrian's point cloud, and determine the points within a preset distance range below the highest point as the point cloud of the stable region.
Specifically, after the pedestrian's point cloud is extracted in step 201, the vertical coordinate of its highest point (the point with the largest z coordinate), i.e., the height of the top of the pedestrian's head, is obtained, and the points within a preset distance below that highest point, for example within 30 centimeters, are taken as the point cloud of the stable region. In this way the point cloud above the pedestrian's chest can be quickly extracted as the stable region.
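A minimal sketch of this height-window rule, assuming an (N, 3) array with z as the vertical axis; the 0.3 m default mirrors the 30 cm example, and the function name is illustrative:

```python
import numpy as np

def stable_region_by_height(ped_points, depth=0.3):
    """Keep the points within `depth` metres below the pedestrian's
    highest point (column 2 is assumed to be the vertical z axis)."""
    z_top = ped_points[:, 2].max()            # height of the head top
    return ped_points[ped_points[:, 2] >= z_top - depth]
```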
In other alternative implementations of this embodiment, the stable region may be extracted as follows: identify the trunk of the pedestrian based on the pedestrian's point cloud, and extract the point cloud of the trunk as the point cloud of the pedestrian's stable region.
When the human body walks, the swing amplitude of the trunk part is small, the morphological change of the trunk part is small relative to other parts such as four limbs, the head and the like, namely, the trunk part is more stable relative to other regions of the human body, so that the point cloud of the trunk part can be extracted as the point cloud of the stable region of the human body. Specifically, the point cloud of the pedestrian can be subjected to regional fitting based on the shape, the aspect ratio and the position characteristics relative to the head (the position of the head can be determined by searching the highest point in the point cloud of the pedestrian), so as to extract the point cloud of the trunk part.
And step 203, calculating the coordinates of the central point of the extracted stable region.
For each road point cloud frame, the center point coordinates of the extracted pedestrian stable-region point cloud can be calculated and used as the pedestrian's position at the acquisition moment of that frame. The coordinates of all points in the stable region can be averaged, and the resulting average point used as the center point; alternatively, the actual point in the stable region closest to that average may be taken as the center point, and its coordinates used as the center point coordinates.
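Both center-point variants described above can be sketched with a hypothetical helper, assuming the stable region is an (N, 3) numpy array:

```python
import numpy as np

def center_point(region, snap_to_point=False):
    """Mean of the stable-region points; optionally snap to the real
    point closest to that mean, as the second variant does."""
    mean = region.mean(axis=0)
    if not snap_to_point:
        return mean
    idx = np.argmin(np.linalg.norm(region - mean, axis=1))
    return region[idx]
```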
And step 204, determining the motion information of the pedestrian based on the central point coordinates of the stable region of the pedestrian in the plurality of continuous road point cloud frames.
The motion information may include a movement direction and/or a movement trajectory. The center point coordinates of the pedestrian's stable region in each road point cloud frame, calculated in step 203, may be used as the pedestrian's position coordinates, and the direction of position change across the multiple frames determines the pedestrian's movement direction. Alternatively, the acquisition time of each of the multiple road point cloud frames may be obtained, and the center point coordinates of the pedestrian's stable region in each frame associated with the corresponding acquisition time to form the pedestrian's motion trajectory.
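The movement-direction part can be sketched as a net-displacement heading over the tracked center points; this is an illustrative helper, not the patent's exact formula:

```python
import numpy as np

def heading(centers):
    """Unit vector from the first to the last stable-region center;
    returns the zero vector when the pedestrian has not moved."""
    centers = np.asarray(centers, dtype=float)
    v = centers[-1] - centers[0]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v
```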
In some optional implementations of this embodiment, the motion information may further include a movement speed. The moving speed of the central point of the stable region can be calculated as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame. Here, the collection time of each road point cloud frame may be obtained from raw data collected by the laser radar, and the movement distance of the central point may be calculated according to coordinates of the central point in the corresponding frame.
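Under these definitions the speed estimate reduces to path length divided by elapsed acquisition time; a sketch under the assumption of per-frame center coordinates and timestamps in seconds (a 10 Hz lidar would give 0.1 s spacing):

```python
import numpy as np

def pedestrian_speed(centers, timestamps):
    """Average speed (m/s) of the stable-region center: summed
    frame-to-frame displacement over the elapsed acquisition time."""
    centers = np.asarray(centers, dtype=float)
    path = np.linalg.norm(np.diff(centers, axis=0), axis=1).sum()
    return path / (timestamps[-1] - timestamps[0])
```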
According to the pedestrian detection method described above, a pedestrian's point cloud is extracted from a road point cloud frame; a stable region, whose form changes less during movement than the pedestrian's other regions, is then extracted; the center point coordinates of the extracted stable region are calculated; and the pedestrian's motion information is finally determined from those center point coordinates across multiple consecutive road point cloud frames. This reduces the motion-state estimation deviation caused by the swaying of a pedestrian's posture while walking, and improves the accuracy of pedestrian motion-state detection.
With continued reference to fig. 3, a flow diagram of another embodiment of a pedestrian detection method in accordance with the present disclosure is shown. As shown in fig. 3, a flow 300 of the pedestrian detection method of the embodiment includes the following steps:
step 301, in response to detecting that the current road point cloud frame contains point clouds of multiple pedestrians, determining a search range of a target pedestrian in the current road point cloud frame based on the point cloud of the target pedestrian in the previous road point cloud frame.
In this embodiment, an executing subject of the pedestrian detection method may first obtain a road point cloud frame acquired by a vehicle-mounted laser radar. In practice, the vehicle-mounted laser radar can periodically acquire point cloud data of the surrounding environment, a point cloud obtained by scanning in a scanning period forms a point cloud frame, and a plurality of continuous point cloud frames form a point cloud frame sequence. The execution main body can be connected with the vehicle-mounted laser radar to sequentially acquire the collected road point cloud frames.
The obtained point cloud of the current road point cloud frame can be clustered, the point cloud is divided into a plurality of areas according to a clustering result, and the areas are matched based on morphological characteristics of a human body, so that the point cloud of each pedestrian is extracted.
A road point cloud frame may contain the point clouds of multiple pedestrians. If the current road point cloud frame is detected to contain multiple pedestrians' point clouds, the search range of a target pedestrian in the current frame can be determined based on the target pedestrian's position in the previous road point cloud frame. Here, the target pedestrian is one of the pedestrians detected in the previous road point cloud frame.
Specifically, the boundary of the region where the target pedestrian is located in the previous road point cloud frame may be expanded outward by a preset multiple (e.g., 1.5 times) and mapped into the current road point cloud frame as the boundary of the target pedestrian's search range. Alternatively, the center point coordinates of the pedestrian's point cloud in the previous frame may be taken as the center of the search range in the current frame, and a spherical region with that center and a preset radius, or a cuboid region with a preset side length, determined as the target pedestrian's search range.
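The bounding-box variant of the search range can be sketched as axis-aligned box operations; the helper names are illustrative, and the 1.5 default mirrors the example multiple in the text:

```python
import numpy as np

def expand_box(box_min, box_max, scale=1.5):
    """Expand the previous frame's bounding box about its center."""
    center = (box_min + box_max) / 2.0
    half = (box_max - box_min) / 2.0 * scale
    return center - half, center + half

def points_in_box(points, box_min, box_max):
    """Select the points of the current frame inside the search range."""
    mask = np.all((points >= box_min) & (points <= box_max), axis=1)
    return points[mask]
```

A human-body point cloud found by `points_in_box` within the expanded box would then be taken as the target pedestrian's point cloud in the current frame.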
For each pedestrian detected in the current road point cloud frame, each of the pedestrians can be respectively used as a target pedestrian, and the search range of the pedestrian in the current road point cloud frame is determined based on step 301.
Step 302, identifying the point cloud of the pedestrian in the search range of the target pedestrian, and taking the point cloud as the point cloud of the target pedestrian in the extracted current road point cloud frame.
After determining the search range of a certain target pedestrian, whether a point cloud of a human body exists in the search range can be detected. If yes, the detected point cloud of the human body can be determined to be the point cloud of the target pedestrian in the current road point cloud frame. In this way, the point cloud of the target pedestrian can be searched only in a small search range without searching in the entire point cloud frame. Moreover, different pedestrians can be searched in corresponding search ranges respectively, so that the situation that different pedestrians are identified as the same pedestrian can be effectively avoided, and the accuracy of pedestrian detection is improved.
Step 303, extracting a stable region of the point cloud of the pedestrian.
For each target pedestrian identified in step 302, the stable region of the target pedestrian may be extracted using the method described in step 202 of the previous embodiment. Wherein, the form change amplitude of the stable region when the pedestrian moves is smaller than that of other regions of the pedestrian.
And step 304, calculating the coordinates of the central point of the extracted stable region.
Likewise, the center point coordinates of the stable region of each target pedestrian may be calculated as the position coordinates of the corresponding target pedestrian by the method described in step 203 of the foregoing embodiment.
Step 305, determining the motion information of the pedestrian based on the coordinates of the central point of the stable region of the pedestrian in a plurality of continuous road point cloud frames.
Then, for each target pedestrian, the moving distance of the central point coordinate of the stable area between a plurality of continuous road point cloud frames can be respectively calculated, and further the moving direction and the track information of each target pedestrian are obtained. The moving speed of the target pedestrian can be calculated by combining the collection time interval of collecting each road point cloud frame, so that the detection of the motion state of the pedestrian is realized.
In the process 300 of the pedestrian detection method of this embodiment, the search ranges of the target pedestrians are respectively determined in the road point cloud frame, the point clouds of different target pedestrians are respectively extracted and the stable regions are further extracted, the central point coordinates of the stable regions are respectively calculated for each target pedestrian, and then the motion state of each target pedestrian is estimated, so that the rapid and accurate detection and the motion state estimation of a plurality of pedestrians are realized.
With further reference to fig. 4, as an implementation of the above-described pedestrian detection method, the present disclosure provides an embodiment of a pedestrian detection apparatus, which corresponds to the method embodiments shown in fig. 2 and 3, and which may be particularly applied in various electronic devices.
As shown in fig. 4, the pedestrian detection device 400 of the present embodiment includes: a first extraction unit 401, a second extraction unit 402, a calculation unit 403 and a determination unit 404. The first extraction unit 401 is configured to extract a point cloud of a pedestrian from the road point cloud frame; the second extraction unit 402 is configured to extract a stable region from the point cloud of the pedestrian, wherein the shape of the stable region changes less when the pedestrian moves than that of other regions of the pedestrian; the calculation unit 403 is configured to calculate center point coordinates of the extracted stable region; the determination unit 404 is configured to determine motion information of the pedestrian based on the center point coordinates of the stable region of the pedestrian in a plurality of consecutive road point cloud frames.
In some embodiments, the second extraction unit 402 may be configured to extract the stable region of the point cloud of the pedestrian as follows: acquiring the longitudinal coordinate of the highest point in the point cloud of the pedestrian; and determining the point cloud in the preset distance range from the highest point to the lower part in the point cloud of the pedestrian as the point cloud of the stable area.
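The highest-point variant above can be sketched in a few lines: keep only the points within a preset vertical band below the top of the pedestrian's cloud (roughly the head and shoulders). The 0.4 m band width and the z-up axis convention are illustrative assumptions:

```python
import numpy as np

def extract_stable_region(pedestrian_cloud: np.ndarray, band: float = 0.4):
    """Keep points within `band` meters below the highest point of the
    pedestrian's point cloud (z assumed to be the vertical axis)."""
    z_top = pedestrian_cloud[:, 2].max()          # longitudinal coordinate of highest point
    mask = pedestrian_cloud[:, 2] >= z_top - band # points in the preset range below it
    return pedestrian_cloud[mask]

cloud = np.array([[0.0, 0.0, 0.2],   # feet
                  [0.0, 0.0, 0.9],   # legs
                  [0.0, 0.0, 1.5],   # shoulders
                  [0.0, 0.0, 1.7]])  # head
stable = extract_stable_region(cloud, band=0.4)   # keeps the top two points
```

The head-and-shoulders band changes shape far less during a walking gait than the swinging limbs, which is what makes it a usable tracking anchor.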
In some embodiments, the second extraction unit 402 may be configured to extract the stable region of the point cloud of the pedestrian as follows: and identifying the body part of the pedestrian based on the point cloud of the pedestrian, and extracting the point cloud of the body part as the point cloud of the stable region of the pedestrian.
In some embodiments, the first extraction unit 401 may be configured to extract the point cloud of the pedestrian from the road point cloud frame as follows: in response to detecting that the current road point cloud frame contains point clouds of a plurality of pedestrians, determining a search range for a target pedestrian in the current road point cloud frame based on the point cloud of the target pedestrian in the previous road point cloud frame; and identifying the point cloud of the pedestrian within the search range as the extracted point cloud of the target pedestrian in the current road point cloud frame.
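One simple way to realize such a search range, offered only as an illustrative sketch (the spherical search region and its 1 m radius are assumptions, not specified in the patent), is nearest-centroid gating around the pedestrian's position in the previous frame:

```python
import numpy as np

def assign_to_search_range(prev_center: np.ndarray,
                           current_clouds: list,
                           radius: float = 1.0):
    """Among the pedestrian point clouds of the current frame, pick the one
    whose centroid lies within `radius` meters of the target pedestrian's
    centroid in the previous frame; return its index, or None if no match."""
    best, best_d = None, radius
    for i, cloud in enumerate(current_clouds):
        d = np.linalg.norm(cloud.mean(axis=0) - prev_center)
        if d <= best_d:
            best, best_d = i, d
    return best

prev = np.array([0.0, 0.0, 1.6])
clouds = [np.array([[0.2, 0.0, 1.6]]),     # same pedestrian, moved slightly
          np.array([[5.0, 5.0, 1.6]])]     # a different pedestrian
idx = assign_to_search_range(prev, clouds, radius=1.0)   # → 0
```

Restricting the association to a small search range keeps multi-pedestrian tracking fast and avoids identity swaps between distant targets.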
In some embodiments, the determining unit 404 may be configured to determine the motion information of the pedestrian as follows: and calculating the moving speed of the central point as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame.
It should be understood that the units recited in the apparatus 400 correspond to the various steps in the methods described with reference to figs. 2 and 3. Thus, the operations and features described above for the pedestrian detection method are equally applicable to the apparatus 400 and the units contained therein, and are not described in detail here.
The pedestrian detection device 400 of the above embodiment of the present disclosure extracts the point cloud of a pedestrian from a road point cloud frame and then extracts a stable region from that point cloud, the stable region being the region whose shape changes less when the pedestrian moves than the pedestrian's other regions. It then calculates the center point coordinates of the extracted stable region, and finally determines the motion information of the pedestrian based on the center point coordinates of the stable region in a plurality of consecutive road point cloud frames, thereby accurately detecting the motion state of the pedestrian.
Referring now to FIG. 5, a schematic diagram of an electronic device (e.g., the server shown in FIG. 1) 500 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; a storage device 508 including, for example, a hard disk; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of embodiments of the present disclosure. It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. 
In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: extracting a point cloud of a pedestrian from the road point cloud frame; extracting a stable region of the point cloud of the pedestrian, wherein the form change amplitude of the stable region is smaller than that of other regions of the pedestrian when the pedestrian moves; calculating the coordinates of the central point of the extracted stable region; and determining the motion information of the pedestrian based on the coordinates of the central point of the stable region of the pedestrian in a plurality of continuous road point cloud frames.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The described units may also be provided in a processor, and may be described as: a processor includes a first extraction unit, a second extraction unit, a calculation unit, and a determination unit. The names of these units do not in some cases constitute a limitation on the unit itself; for example, the first extraction unit may also be described as a "unit that extracts a point cloud of pedestrians from a road point cloud frame".
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the present disclosure is not limited to the specific combination of the above-mentioned features, but also encompasses other embodiments in which any combination of the above-mentioned features or their equivalents is possible without departing from the inventive concept as defined above. For example, the above features may be replaced with (but not limited to) features having similar functions disclosed in the present application.

Claims (12)

1. A pedestrian detection method, comprising:
respectively extracting pedestrian point clouds from a plurality of continuous road point cloud frames;
performing regional morphological change amplitude analysis on each extracted point cloud, and selecting a region whose morphological change amplitude is smaller than a preset threshold as a stable region, wherein the morphological change amplitude of the stable region when the pedestrian moves is smaller than that of other regions of the pedestrian, and the morphological change amplitude includes a change amplitude and a change rate of the morphology;
calculating the coordinates of the central point of the extracted stable region;
determining motion information of the pedestrian based on center point coordinates of stable regions of the pedestrian in a plurality of continuous road point cloud frames;
the calculating the coordinates of the central point of the extracted stable region includes: and determining the coordinates of the extracted average points of the point clouds, and taking the coordinates of the points closest to the average points in the extracted stable region as the coordinates of the central point.
2. The method of claim 1, wherein the performing stable region extraction on the point cloud of the pedestrian comprises:
acquiring the longitudinal coordinate of the highest point in the point cloud of the pedestrian;
and determining the point cloud in the preset distance range from the highest point to the lower part in the point cloud of the pedestrian as the point cloud of the stable area.
3. The method of claim 1, wherein the performing stable region extraction on the point cloud of the pedestrian comprises:
and identifying a body part of the pedestrian based on the point cloud of the pedestrian, and extracting the point cloud of the body part as the point cloud of a stable region of the pedestrian.
4. The method of claim 1, wherein said extracting a point cloud of pedestrians from a road point cloud frame comprises:
in response to detecting that the current road point cloud frame contains point clouds of a plurality of pedestrians, determining a search range of a target pedestrian in the current road point cloud frame based on the point cloud of the target pedestrian in the previous road point cloud frame;
and identifying the point cloud of the pedestrian in the search range, and taking the point cloud as the point cloud of the target pedestrian in the extracted current road point cloud frame.
5. The method of any one of claims 1-4, wherein the determining motion information of the pedestrian based on center point coordinates of a stable region of the pedestrian in a plurality of consecutive road point cloud frames comprises:
and calculating the moving speed of the central point as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame.
6. A pedestrian detection apparatus comprising:
a first extraction unit configured to extract point clouds of pedestrians from a plurality of continuous road point cloud frames, respectively;
a second extraction unit configured to perform regional morphological change amplitude analysis on each extracted point cloud, and select a region whose morphological change amplitude is smaller than a preset threshold as a stable region, wherein the morphological change amplitude of the stable region when the pedestrian moves is smaller than that of other regions of the pedestrian, and the morphological change amplitude includes a change amplitude and a change rate of the morphology;
a calculation unit configured to calculate center point coordinates of the extracted stable region;
a determination unit configured to determine motion information of the pedestrian based on center point coordinates of stable regions of the pedestrian in a plurality of continuous road point cloud frames;
wherein the computing unit is further configured to: and determining the coordinates of the extracted average points of the point clouds, and taking the coordinates of the points closest to the average points in the extracted stable region as the coordinates of the central point.
7. The apparatus according to claim 6, wherein the second extraction unit is configured to perform stable region extraction on the point cloud of the pedestrian as follows:
acquiring the longitudinal coordinate of the highest point in the point cloud of the pedestrian;
and determining the point cloud in the preset distance range from the highest point to the lower part in the point cloud of the pedestrian as the point cloud of the stable area.
8. The apparatus according to claim 6, wherein the second extraction unit is configured to perform stable region extraction on the point cloud of the pedestrian as follows:
and identifying a body part of the pedestrian based on the point cloud of the pedestrian, and extracting the point cloud of the body part as the point cloud of a stable region of the pedestrian.
9. The apparatus according to claim 6, wherein the first extraction unit is configured to extract a point cloud of a pedestrian from a road point cloud frame as follows:
in response to the fact that the current road point cloud frame contains the point clouds of a plurality of pedestrians, determining the search range of the target pedestrian in the current road point cloud frame based on the point cloud of the target pedestrian in the last road point cloud frame;
and identifying the point cloud of the pedestrian in the search range, and taking the point cloud as the point cloud of the target pedestrian in the extracted current road point cloud frame.
10. The apparatus according to any one of claims 6-9, wherein the determining unit is configured to determine the motion information of the pedestrian as follows:
and calculating the moving speed of the central point as the moving speed of the pedestrian based on the moving distance of the central point of the stable region of the pedestrian in the road point cloud frame and the acquisition time of each road point cloud frame.
11. An electronic device, comprising:
one or more processors;
a storage device storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-5.
12. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-5.
CN201910962342.4A 2019-10-11 2019-10-11 Pedestrian detection method and device Active CN110717918B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910962342.4A CN110717918B (en) 2019-10-11 2019-10-11 Pedestrian detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910962342.4A CN110717918B (en) 2019-10-11 2019-10-11 Pedestrian detection method and device

Publications (2)

Publication Number Publication Date
CN110717918A CN110717918A (en) 2020-01-21
CN110717918B true CN110717918B (en) 2022-06-07

Family

ID=69211421

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910962342.4A Active CN110717918B (en) 2019-10-11 2019-10-11 Pedestrian detection method and device

Country Status (1)

Country Link
CN (1) CN110717918B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111339996B (en) * 2020-03-20 2023-05-09 北京百度网讯科技有限公司 Method, device, equipment and storage medium for detecting static obstacle
CN111428692A (en) * 2020-04-23 2020-07-17 北京小马慧行科技有限公司 Method and device for determining travel trajectory of vehicle, and storage medium
CN111813120A (en) * 2020-07-10 2020-10-23 北京林业大学 Method and device for identifying moving target of robot and electronic equipment
CN113970752A (en) * 2020-07-22 2022-01-25 商汤集团有限公司 Target detection method and device, electronic equipment and storage medium
CN113838112A (en) * 2021-09-24 2021-12-24 东莞市诺丽电子科技有限公司 Trigger signal determining method and trigger signal determining system of image acquisition system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10108867B1 (en) * 2017-04-25 2018-10-23 Uber Technologies, Inc. Image-based pedestrian detection
TWI651686B (en) * 2017-11-30 2019-02-21 國家中山科學研究院 Optical radar pedestrian detection method
CN109949347A (en) * 2019-03-15 2019-06-28 百度在线网络技术(北京)有限公司 Human body tracing method, device, system, electronic equipment and storage medium
CN110033430A (en) * 2019-02-20 2019-07-19 阿里巴巴集团控股有限公司 A kind of pedestrian's quantity statistics method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108256479B (en) * 2018-01-17 2023-08-01 百度在线网络技术(北京)有限公司 Face tracking method and device
CN109309813A (en) * 2018-10-22 2019-02-05 北方工业大学 Intelligent following method suitable for indoor environment and intelligent following robot
CN110263652B (en) * 2019-05-23 2021-08-03 杭州飞步科技有限公司 Laser point cloud data identification method and device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10108867B1 (en) * 2017-04-25 2018-10-23 Uber Technologies, Inc. Image-based pedestrian detection
TWI651686B (en) * 2017-11-30 2019-02-21 國家中山科學研究院 Optical radar pedestrian detection method
CN110033430A (en) * 2019-02-20 2019-07-19 阿里巴巴集团控股有限公司 A kind of pedestrian's quantity statistics method and device
CN109949347A (en) * 2019-03-15 2019-06-28 百度在线网络技术(北京)有限公司 Human body tracing method, device, system, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Pedestrian Detection with Lidar Point Clouds Based on Single Template Matching; Kaiqi Liu et al.; Electronics; 2019-07-11; pp. 1-19 *
Research on Pedestrian Detection Methods Based on Laser Scanning Technology; Zhang Zhigang et al.; Computer Science; 2016-07-31; Vol. 43, No. 7; pp. 328-331 *

Also Published As

Publication number Publication date
CN110717918A (en) 2020-01-21

Similar Documents

Publication Publication Date Title
CN110717918B (en) Pedestrian detection method and device
US11776155B2 (en) Method and apparatus for detecting target object in image
CN110019570B (en) Map construction method and device and terminal equipment
US10817748B2 (en) Method and apparatus for outputting information
JP2021516355A (en) Map compartment system for self-driving cars
JP2021515254A (en) Real-time map generation system for self-driving cars
JP2021515282A (en) Real-time map generation system for self-driving cars
CN110785719A (en) Method and system for instant object tagging via cross temporal verification in autonomous vehicles
JP2021516183A (en) Point cloud ghost effect detection system for self-driving cars
CN110869559A (en) Method and system for integrated global and distributed learning in autonomous vehicles
CN110753953A (en) Method and system for object-centric stereo vision in autonomous vehicles via cross-modality verification
US11724721B2 (en) Method and apparatus for detecting pedestrian
US9971402B2 (en) Information processing system, mobile terminal, server apparatus, method for processing information, and non-transitory computer readable storage medium
CN110654381A (en) Method and device for controlling a vehicle
CN110696826B (en) Method and device for controlling a vehicle
CN112622923B (en) Method and device for controlling a vehicle
CN113392793A (en) Method, device, equipment, storage medium and unmanned vehicle for identifying lane line
CN110654380A (en) Method and device for controlling a vehicle
CN112558036B (en) Method and device for outputting information
CN115512336B (en) Vehicle positioning method and device based on street lamp light source and electronic equipment
CN116823884A (en) Multi-target tracking method, system, computer equipment and storage medium
CN115675528A (en) Automatic driving method and vehicle based on similar scene mining
CN112614156A (en) Training method and device for multi-target tracking network model and related equipment
CN112668371A (en) Method and apparatus for outputting information
CN115431968B (en) Vehicle controller, vehicle and vehicle control method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant