CN109740464B - Target identification following method - Google Patents


Info

Publication number
CN109740464B
CN109740464B (application CN201811574550.9A)
Authority
CN
China
Prior art keywords
following
suspicious object
features
suspicious
matched
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811574550.9A
Other languages
Chinese (zh)
Other versions
CN109740464A (en)
Inventor
张德兆
王肖
张放
李晓飞
霍舒豪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Idriverplus Technologies Co Ltd
Original Assignee
Beijing Idriverplus Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Idriverplus Technologies Co Ltd filed Critical Beijing Idriverplus Technologies Co Ltd
Priority to CN201811574550.9A
Publication of CN109740464A
Application granted
Publication of CN109740464B
Legal status: Active
Anticipated expiration

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The invention provides a target identification following method, which comprises the following steps: acquiring video data around a vehicle; performing feature extraction on single-frame and/or multi-frame image information and determining features to be matched, the features to be matched comprising suspected face features and/or suspicious article features; matching the suspected face features with face features in a preset first feature library to determine a first matching degree, and/or matching the suspicious article features with article features in a preset second feature library to determine a second matching degree, and thereby determining a suspicious object; determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object; and, when the suspicious object is in a moving state, sending a following command to a following device on the vehicle so that the following device follows the suspicious object, the following command comprising the successfully matched suspected face features and/or suspicious article features among the features to be matched, together with the path. The suspicious object can thus be followed without affecting the vehicle's current work.

Description

Target identification following method
Technical Field
The invention relates to the technical field of security protection, in particular to a target identification following method.
Background
In the prior art, security protection is often achieved by deploying cameras and performing face recognition on the data they collect, so that abnormal personnel can be recognized. However, this approach suffers from drawbacks such as high cost and monitoring blind spots.
In some fields, a positioning chip is arranged on the followed object, and a following device follows the object carrying the chip. However, such a following device follows one specific target, and the target is generally set manually, so the range of application is limited.
Therefore, how to automatically identify and follow a target over a wide range has become a problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention aims to provide a target identification following method, so as to solve the problem in the prior art that the range of application of following is relatively limited.
In order to solve the above problem, the present invention provides a target identification following method, the method comprising:
acquiring video data around a vehicle;
processing the video data to determine single-frame and/or multi-frame image information;
performing feature extraction on the single-frame and/or multi-frame image information and determining features to be matched; the features to be matched comprise suspected face features and/or suspicious article features;
matching the suspected face features with face features in a preset first feature library to determine a first matching degree; and/or matching the suspicious article features with article features in a preset second feature library to determine a second matching degree;
when the first matching degree is greater than a preset first matching threshold and/or the second matching degree is greater than a preset second matching threshold, determining a suspicious object;
determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object;
when the suspicious object is in a moving state, sending a following command to a following device on the vehicle so that the following device follows the suspicious object; the following command comprises the successfully matched suspected face features and/or suspicious article features among the features to be matched, and the path.
In a possible implementation manner, when the state of the suspicious object is a moving state, sending a following command to a following device on the vehicle specifically comprises:
when there are multiple suspicious objects, determining the number of following devices according to the state and the path of each of the multiple suspicious objects;
sending following commands, equal in number to the following devices, to the corresponding following devices; each following command comprises the successfully matched suspected face features and/or suspicious article features of one suspicious object and the path of that suspicious object.
In one possible implementation, the method further includes:
acquiring position information of a vehicle;
determining map data according to the position information of the vehicle;
determining following mode selection information according to the map data; the following mode selection information is fixed following or random following;
and sending the following mode selection information to the following device so that the following device follows according to the following mode selection information.
In one possible implementation, the method further includes:
and when the state of the suspicious object is in a static state, sending the suspected human face features successfully matched in the features to be matched and/or the suspected article features successfully matched in the features to be matched, and the features of the suspicious object in the first feature library and/or the second feature library corresponding to the suspected human face features and/or the suspected article features to a server.
In a possible implementation manner, before performing the feature extraction on the single-frame and/or multi-frame image information and determining the features to be matched, the method further comprises:
segmenting and tracking the laser point cloud data to obtain a point cloud segmentation result;
processing the point cloud segmentation result to obtain a pedestrian/human body contour;
and matching the pedestrian/human body outline with the image information on a time axis.
In a possible implementation manner, before determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object, the method further includes:
sending the successfully matched suspected face features and/or suspicious article features among the features to be matched to a server, so that the server performs secondary matching;
and when the matching is successful, receiving a matching success message sent by the server.
In a possible implementation manner, the method further comprises: processing each frame of image information to acquire environmental data in each frame of image information;
fitting the environmental data and preset map data, and determining a plurality of position information of the suspicious object according to a fitting result;
and splicing the plurality of position information to determine the path of the suspicious object.
In one possible implementation, the method further includes:
receiving real-time image information of the suspicious object sent by the following device;
calculating the position information of the suspicious object according to the real-time image information;
receiving the position information of the following device sent by the following device;
judging whether the suspicious object is in a following range or not according to the real-time image information and the position information;
and when the suspicious object is out of the following range, sending the real-time position information of the suspicious object and the position information of the following device to a server.
By applying the target identification following method provided by the invention, an unmanned device determines a suspicious object from video data and then uses a following device carried on the vehicle to follow it; whether the suspicious object remains within the following range can be judged, and the in-range and out-of-range cases are handled differently, so that the suspicious object is followed without the current work being affected.
Drawings
Fig. 1 is a schematic flow chart of a target identification following method according to an embodiment of the present invention.
Detailed Description
The present application will be described in further detail with reference to the following drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be further noted that, for the convenience of description, only the portions related to the related invention are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
Fig. 1 is a schematic flow chart of a target identification following method according to an embodiment of the present invention. The execution subject of the method may be the control unit of an autonomous vehicle. The vehicle control unit can be understood as a control module for controlling the travel of the vehicle; it is the data-processing center of the unmanned vehicle and can perform autonomous decision-making, path planning, and the like. The identification following method is applied to an unmanned scene, in particular to an unmanned vehicle, so that the data of the unmanned device can be utilized and urban security costs can be saved.
As shown in fig. 1, the method comprises the steps of:
Step 101, acquiring video data around the vehicle.
Specifically, in order to save human resources, an autonomous vehicle can be used for cleaning, and while it cleans automatically, the structure of the vehicle can also be used to inspect the safety of the road section. By way of example and not limitation, in a particular time period with few pedestrians, such as 00:00-5:00, the cleaning vehicle can patrol and inspect the road section it is cleaning while performing the cleaning work; in the time period 12:00-16:00, when the vehicle performs cleaning work in a public place, the structure of the vehicle can be used to photograph the environment of the public place.
Specifically, the vehicle is provided with an acquisition device, which may be a binocular camera. The binocular camera can be used to acquire video of the road section the vehicle passes through; the video data is then processed and single-frame and/or multi-frame image information is extracted from it. Each frame of image information includes time information.
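By way of example and not limitation, the frame extraction described above might be sketched in Python as follows; the use of OpenCV and the fixed sampling interval are assumptions of this illustration, not details prescribed by the method.

```python
import cv2  # OpenCV is an assumed choice; the method names no library


def extract_frames(video_path, every_n=10):
    """Extract single/multi-frame image information from video data,
    tagging each frame with its capture time, as the method requires."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unknown
    frames = []
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of stream
            break
        if index % every_n == 0:
            # each frame of image information includes time information
            frames.append({"image": frame, "time": index / fps})
        index += 1
    cap.release()
    return frames
```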
Step 102, processing the video data to determine single-frame and/or multi-frame image information.
Step 103, performing feature extraction on the single-frame and/or multi-frame image information and determining features to be matched; the features to be matched comprise suspected face features and/or suspicious article features.
Specifically, the vehicle control unit may process the video data to extract image information from it, and may process the image information through a face recognition algorithm to extract a plurality of features.
A feature may be a face feature to be matched, such as the suspected face feature of a perpetrator forcibly abducting a child, or a face feature captured in a public place; a feature may also be a suspicious article feature, such as a controlled instrument or a package being carried away.
The suspected face features and the suspicious article features may exist in the image information at the same time.
Step 104, matching the suspected face features with face features in a preset first feature library to determine a first matching degree; and/or matching the suspicious article features with article features in a preset second feature library to determine a second matching degree.
Furthermore, besides the binocular camera, various sensors are installed on the vehicle; for example, a laser radar can collect laser point cloud data. The contours of suspicious articles, pedestrians, or human bodies can be determined from the laser point cloud data and matched with the features to be matched in the image information, which further improves the accuracy of the image-based results.
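By way of example and not limitation, matching the point-cloud contours with the image information on a time axis could be a nearest-timestamp association, as in the following sketch; the dictionary layout and the 50 ms tolerance are assumptions of this illustration.

```python
def align_on_time_axis(contours, frames, max_gap=0.05):
    """Associate each point-cloud contour with the camera frame whose
    timestamp is nearest, within max_gap seconds (assumed tolerance)."""
    if not frames:
        return []
    pairs = []
    for contour in contours:  # each contour dict carries a "time" key
        nearest = min(frames, key=lambda f: abs(f["time"] - contour["time"]))
        if abs(nearest["time"] - contour["time"]) <= max_gap:
            pairs.append((contour, nearest))
    return pairs
```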
Step 105, when the first matching degree is greater than the preset first matching threshold and/or the second matching degree is greater than the preset second matching threshold, determining the suspicious object.
Specifically, by way of example and not limitation, the first matching threshold may be set to 90% for face matching, and the second matching threshold may be set to 95% for suspicious article matching. The suspicious object may be a successfully matched suspicious article, or a suspicious face, such as a face captured in a public environment or the face of the perpetrator of a suspicious behavior.
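By way of example and not limitation, the matching degrees and thresholds might be computed as in the sketch below. Cosine similarity is an assumed metric (the method does not prescribe one), and the library dictionaries mapping an identity to a feature vector are hypothetical stand-ins for the preset first and second feature libraries.

```python
import numpy as np

FIRST_MATCH_THRESHOLD = 0.90   # face matching, per the example above
SECOND_MATCH_THRESHOLD = 0.95  # article matching, per the example above


def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_feature(feature, library, threshold):
    """Return (entry id, matching degree) for the best library entry if
    its matching degree exceeds the preset threshold, otherwise None."""
    best_id, best_score = None, 0.0
    for entry_id, lib_feature in library.items():
        score = cosine_similarity(feature, lib_feature)
        if score > best_score:
            best_id, best_score = entry_id, score
    return (best_id, best_score) if best_score > threshold else None
```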
Further, in order to improve the matching accuracy, secondary matching may be performed after the first match succeeds: the currently matched image information, together with the corresponding suspicious article in the article feature library and/or the suspicious person in the face feature library, may be matched against another, more accurate feature library on the vehicle, or sent to a server so that the server performs the matching, and the suspicious object is determined only after the secondary matching succeeds. The accuracy of the determined suspicious object is thereby improved.
Step 106, determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object.
Specifically, the position information of the suspicious object may be obtained by processing the acquired single-frame and/or multi-frame image information.
Each frame of image information can be processed first, and environmental data in each frame of image information is acquired;
and fitting the environmental data and preset map data, and determining a plurality of position information of the suspicious object according to the fitting result.
The image information includes environmental data such as building identification, traffic identification, road identification, and the like.
After the environmental data is fitted to the map data, the features common to both can be processed comprehensively and the position information of the suspicious object calculated.
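By way of example and not limitation, this fitting step could be approximated by associating the environmental features detected in a frame with the same features in the map; the landmark dictionaries and the crude centroid estimate below are assumptions of this illustration, not the method itself.

```python
def fit_position(detected_landmarks, map_landmarks):
    """Roughly estimate a position from the environmental features
    (building/traffic/road identifications) shared by frame and map."""
    shared = [map_landmarks[name] for name in detected_landmarks
              if name in map_landmarks]
    if not shared:
        return None  # no common features, so no position can be fitted
    x = sum(p[0] for p in shared) / len(shared)
    y = sum(p[1] for p in shared) / len(shared)
    return (x, y)
```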
Because each frame of image information also includes the time at which it was acquired, the multiple pieces of position information can be spliced according to this time information to generate a path.
The state of the suspicious object, which comprises a moving state and a static state, can be determined according to the position information or the path.
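By way of example and not limitation, splicing the position information by time and classifying the state could look like the following sketch; the 1 m displacement radius separating static from moving is an assumed value.

```python
def build_path_and_state(positions, still_radius=1.0):
    """Splice timestamped (time, x, y) fixes into a path and classify
    the suspicious object as moving or static."""
    if not positions:
        return [], "static"  # assumed default when no fixes exist
    path = sorted(positions, key=lambda p: p[0])  # order by time
    (_, x0, y0), (_, x1, y1) = path[0], path[-1]
    displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    state = "moving" if displacement > still_radius else "static"
    return path, state
```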
Step 107, when the suspicious object is in a moving state, sending a following command to a following device on the vehicle so that the following device follows the suspicious object; the following command comprises the successfully matched suspected face features and/or suspicious article features among the features to be matched, and the path.
Wherein step 107 comprises:
when there are multiple suspicious objects, determining the number of following devices according to the state and the path of each of the multiple suspicious objects;
sending following commands, equal in number to the following devices, to the corresponding following devices; each following command comprises the successfully matched suspected face features and/or suspicious article features of one suspicious object and the path of that suspicious object.
Specifically, by way of example and not limitation, the following device may be a robot dog, and the number of following devices may be determined according to the number of suspicious objects. For example, when there are two suspicious persons A and B, both in a moving state but with different paths, the number of following devices may be determined to be 2, namely following device 1 and following device 2. A first following command, comprising the successfully matched face features of suspicious person A and the path of A, may be sent to following device 1, and a second following command, comprising the successfully matched face features of suspicious person B and the path of B, may be sent to following device 2. After the vehicle, such as a cleaning vehicle, determines a suspicious target, it can continue its current cleaning work; by sending following commands to the following devices on the vehicle, each following device follows its target, so every suspicious object is followed without the current cleaning work being affected.
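By way of example and not limitation, pairing moving suspicious objects with following devices and building the following commands might be sketched as follows; the dictionary layout of the suspects and commands is an assumption of this illustration.

```python
def dispatch_followers(suspects, follower_ids):
    """Pair each moving suspicious object with one following device and
    build its following command (matched features plus path). Note that
    zip() stops when the available following devices run out."""
    moving = [s for s in suspects if s["state"] == "moving"]
    commands = []
    for suspect, follower_id in zip(moving, follower_ids):
        commands.append({
            "follower_id": follower_id,
            "features": suspect["matched_features"],  # face and/or article
            "path": suspect["path"],
        })
    return commands
```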
Further, the method further comprises:
acquiring the position information of the vehicle;
determining map data according to the position information of the vehicle;
determining following mode selection information according to the map data; the following mode selection information is fixed following or random following;
and sending the following mode selection information to the following device so that the following device follows according to the following mode selection information.
Specifically, the position information of the vehicle itself may be acquired by a positioning module on the vehicle, such as a Global Positioning System (GPS) module. The position information can also be obtained by sending a query message to the server and parsing the response message, carrying the position information, that the server returns.
When the vehicle is at a certain location, a map of that location may be loaded; for example, when the vehicle is on street A, a map of city A, the administrative unit above street A, may be loaded. The map may be downloaded from a server or preloaded on the vehicle, which is not limited by the present application.
The position information comprises longitude and latitude data, driving direction information and time information.
The control unit automatically analyzes the terrain in the map data, for example to estimate a tracking difficulty, matches the difficulty against a prestored difficulty table, and automatically selects a following mode. For example, if analysis of the map data shows that the current location is a plain with flat roads and few buildings, the tracking difficulty is 50%; in the difficulty table, the following mode corresponding to this difficulty is fixed following, so fixed following is output and subsequently sent to the following device, which then follows in the fixed-following mode. If the current location is a street with steep slopes, many bends, and many buildings, the tracking difficulty is 70%; looking up the difficulty table, the following mode corresponding to this difficulty is random following, so random following is output and subsequently sent to the following device, which then follows in the random-following mode.
In fixed following, the following device follows the suspicious object in real time; in random following, the following device can predict the trajectory of the suspicious object and follow that prediction.
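By way of example and not limitation, the lookup in the prestored difficulty table might be sketched as follows; the table contents mirror the 50% and 70% examples above, and the fallback to fixed following is an assumption.

```python
# prestored difficulty table: flat, open terrain (difficulty 0.5) maps to
# fixed following; steep, winding, built-up streets (0.7) to random following
DIFFICULTY_TABLE = [(0.5, "fixed"), (0.7, "random")]


def select_following_mode(difficulty):
    """Return the following mode for an analyzed tracking difficulty by
    taking the highest table entry not exceeding the difficulty."""
    mode = DIFFICULTY_TABLE[0][1]  # assumed fallback for low difficulties
    for threshold, table_mode in DIFFICULTY_TABLE:
        if difficulty >= threshold:
            mode = table_mode
    return mode
```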
Further, the vehicle may determine, through the sensing module on the vehicle, the position information of multiple following devices within the range detectable by the sensing module. When following devices are beyond this range, the vehicle may determine the position information of the following devices and of the suspicious object by interacting with the following devices.
Specifically, while a following device is following, the vehicle can receive and process the real-time image information sent by the device to calculate the position information of the suspicious object, and can also acquire the position information of the following device itself that the device sends. From these two pieces of position information it judges whether the suspicious object is within the following range of the following device; for example, a following range of 500 m may be set. When the suspicious object is within the following range, the position information and the real-time image information may be stored; when it is out of range, they may be sent to a server. The server may be a third-party server, so that the status of the following device and the status of the suspicious object are sent to the third-party server for alerting.
The following device may be provided with an Inertial Measurement Unit (IMU) for obtaining its own position information. The following device may acquire image information of the suspicious object with its image acquisition device and calculate the position information of the suspicious object from that image information, or it may send the image information to the vehicle so that the vehicle performs the calculation.
It can be understood that sensors such as an ultrasonic radar can also be mounted on the following device, so that the distance between the following device and the suspicious object can be calculated by ultrasonic ranging to judge whether the suspicious object is within the following range.
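By way of example and not limitation, the following-range judgment could be sketched as follows; the planar distance computation is an assumption, and 500 m is the example range given above.

```python
FOLLOW_RANGE_M = 500.0  # example following range from the text


def within_following_range(suspect_xy, follower_xy):
    """Judge whether the suspicious object is within the following range;
    the caller stores the data if True and reports to the server if False."""
    dx = suspect_xy[0] - follower_xy[0]
    dy = suspect_xy[1] - follower_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= FOLLOW_RANGE_M
```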
Further, when the suspicious object is in a static state, the successfully matched suspected face features and/or suspicious article features among the features to be matched, together with the corresponding features of the suspicious object in the first feature library and/or the second feature library, are sent to the server.
The state comprises a moving state or a static state, and whether the suspicious object is moving or static is determined according to the time information of each frame of image. When the suspicious object is in a static state, the information from the two rounds of matching and the acquired image information can be reported directly to the server.
The server in the above steps may also be a third-party server, which may be the server of some organization, for example the server of an agency that manages missing persons, so that the third-party server can conveniently use the reported information for security work. This saves security costs and expands the security coverage, so that security protection is possible even in areas where no cameras are deployed.
It will be appreciated that the method may also be applied to other mobile devices; by way of example and not limitation, the vehicle may be replaced by a robot, such as a robot that performs handling tasks.
By applying the target identification following method provided by the invention, an unmanned device determines a suspicious object from video data and then uses a following device carried on the vehicle to follow it; whether the suspicious object remains within the following range can be judged, and the in-range and out-of-range cases are handled differently, so that the suspicious object is followed without the current work being affected.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), flash memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above embodiments are provided to further explain the objects, technical solutions and advantages of the present invention in detail, it should be understood that the above embodiments are merely exemplary embodiments of the present invention and are not intended to limit the scope of the present invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A target identification following method, the method comprising:
acquiring, by a control unit of an autonomous vehicle, video data around the autonomous vehicle while the autonomous vehicle performs cleaning work;
processing the video data to determine single-frame and/or multi-frame image information;
performing feature extraction on the single-frame and/or multi-frame image information and determining features to be matched; the features to be matched comprise suspected face features and/or suspicious article features;
matching the suspected face features with face features in a preset first feature library to determine a first matching degree; and/or matching the suspicious article features with article features in a preset second feature library to determine a second matching degree;
when the first matching degree is greater than a preset first matching threshold and/or the second matching degree is greater than a preset second matching threshold, determining a suspicious object;
determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object;
when the suspicious object is in a moving state, the autonomous vehicle continuing the cleaning work and sending a following command to a following device on the vehicle so that the following device follows the suspicious object; the following command comprises the successfully matched suspected face features and/or suspicious article features among the features to be matched, and the path;
wherein the method further comprises:
acquiring position information of a vehicle;
determining map data according to the position information of the vehicle;
determining a tracking difficulty according to the map data, matching the tracking difficulty with a prestored difficulty table, and determining following mode selection information; the following mode selection information is fixed following or random following; in fixed following, the following device follows the suspicious object in real time, and in random following, the following device predicts the trajectory of the suspicious object and follows the prediction;
and sending the following mode selection information to the following device so that the following device follows according to the following mode selection information.
2. The method according to claim 1, wherein, when the suspicious object is in a moving state, sending a following command to a following device on the vehicle comprises:
when there are multiple suspicious objects, determining the number of following devices according to the state and the path of each of the multiple suspicious objects;
sending following commands, equal in number to the following devices, to the corresponding following devices; each following command comprises the successfully matched suspected face features and/or suspicious article features of one suspicious object and the path of that suspicious object.
3. The method of claim 1, further comprising:
and when the state of the suspicious object is in a static state, sending the suspected human face features successfully matched in the features to be matched and/or the suspected article features successfully matched in the features to be matched, and the features of the suspicious object in the first feature library and/or the second feature library corresponding to the suspected human face features and/or the suspected article features to a server.
4. The method according to claim 1, wherein, before performing the feature extraction on the single-frame and/or multi-frame image information and determining the features to be matched, the method further comprises:
segmenting and tracking the laser point cloud data to obtain a point cloud segmentation result;
processing the point cloud segmentation result to obtain a pedestrian/human body contour;
and matching the pedestrian/human body outline with the image information on a time axis.
5. The method according to claim 1, wherein before determining the state and the path of the suspicious object according to the image information corresponding to the suspicious object, the method further comprises:
sending the successfully matched suspected face features and/or suspicious article features among the features to be matched to a server, so that the server performs secondary matching;
and when the matching is successful, receiving a matching success message sent by the server.
6. The method of claim 1, wherein the method further comprises: processing each frame of image information to acquire environmental data in each frame of image information;
fitting the environmental data and preset map data, and determining a plurality of position information of the suspicious object according to a fitting result;
and splicing the plurality of position information to determine the path of the suspicious object.
7. The method of claim 1, further comprising:
receiving real-time image information of the suspicious object sent by the following device;
calculating the position information of the suspicious object according to the real-time image information;
receiving the position information of the following device sent by the following device;
judging whether the suspicious object is in a following range or not according to the real-time image information and the position information;
and when the suspicious object is out of the following range, sending the real-time position information of the suspicious object and the position information of the following device to a server.
CN201811574550.9A 2018-12-21 2018-12-21 Target identification following method Active CN109740464B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811574550.9A CN109740464B (en) 2018-12-21 2018-12-21 Target identification following method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811574550.9A CN109740464B (en) 2018-12-21 2018-12-21 Target identification following method

Publications (2)

Publication Number Publication Date
CN109740464A CN109740464A (en) 2019-05-10
CN109740464B (en) 2021-01-26

Family

ID=66359512

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811574550.9A Active CN109740464B (en) 2018-12-21 2018-12-21 Target identification following method

Country Status (1)

Country Link
CN (1) CN109740464B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393265B (en) * 2021-05-25 2023-04-25 浙江大华技术股份有限公司 Feature library construction method for passing object, electronic device and storage medium

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6815724B2 (en) * 2015-11-04 2021-01-20 トヨタ自動車株式会社 Autonomous driving system
CN105686766A (en) * 2016-04-14 2016-06-22 京东方科技集团股份有限公司 Cleaning robot and working method for cleaning robot
CN106155065A * 2016-09-28 2016-11-23 上海仙知机器人科技有限公司 Robot following method and device for robot following
CN206411817U * 2016-12-06 2017-08-15 山东康威通信技术股份有限公司 Prison perimeter patrol and warning robot system
CN106779857A * 2016-12-23 2017-05-31 湖南晖龙股份有限公司 Purchase method of a remote-control robot
CN106774345B (en) * 2017-02-07 2020-10-30 上海仙软信息科技有限公司 Method and equipment for multi-robot cooperation
CN108496141B (en) * 2017-06-30 2021-11-12 深圳市大疆创新科技有限公司 Method for controlling following of movable equipment, control equipment and following system
CN107807652A * 2017-12-08 2018-03-16 灵动科技（北京）有限公司 Merchandising robot, method therefor, controller and computer-readable medium
CN208255717U * 2017-12-08 2018-12-18 灵动科技（北京）有限公司 Merchandising robot
CN108334098B * 2018-02-28 2018-09-25 弗徕威智能机器人科技（上海）有限公司 Multi-sensor-based human body following method
CN108733280A * 2018-03-21 2018-11-02 北京猎户星空科技有限公司 Focus following method and apparatus for a smart device, smart device, and storage medium
CN108673501B (en) * 2018-05-17 2022-06-07 中国科学院深圳先进技术研究院 Target following method and device for robot

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1761815A1 (en) * 2004-06-21 2007-03-14 Koninklijke Philips Electronics N.V. Aberration correction for spectroscopic analysis
CN106354161A (en) * 2016-09-26 2017-01-25 湖南晖龙股份有限公司 Robot motion path planning method
CN107085938A * 2017-06-08 2017-08-22 中南大学 Fault-tolerant planning method for intelligent-driving local path following based on lane lines and GPS
CN108931991A * 2018-08-30 2018-12-04 王瑾琨 Automatic following method for a mobile vehicle, and mobile vehicle with automatic following and obstacle-avoidance functions

Also Published As

Publication number Publication date
CN109740464A (en) 2019-05-10

Similar Documents

Publication Publication Date Title
CN109686031B (en) Identification following method based on security
CN109740462B (en) Target identification following method
CN109583415B (en) Traffic light detection and identification method based on fusion of laser radar and camera
JP6230673B2 (en) Traffic signal map creation and detection
CN113240909B (en) Vehicle monitoring method, equipment, cloud control platform and vehicle road cooperative system
CN109345829B (en) Unmanned vehicle monitoring method, device, equipment and storage medium
CN111291697B (en) Method and device for detecting obstacles
US11371851B2 (en) Method and system for determining landmarks in an environment of a vehicle
US10950125B2 (en) Calibration for wireless localization and detection of vulnerable road users
CN109740461B (en) Object and subsequent processing method
CN112383756B (en) Video monitoring alarm processing method and device
CN115366885A (en) Method for assisting a driving maneuver of a motor vehicle, assistance device and motor vehicle
CN115294544A (en) Driving scene classification method, device, equipment and storage medium
CN114360261B (en) Vehicle reverse running identification method and device, big data analysis platform and medium
Kandoi et al. Pothole detection using accelerometer and computer vision with automated complaint redressal
CN109740464B (en) Target identification following method
US11512969B2 (en) Method for ascertaining in a backend, and providing for a vehicle, a data record, describing a landmark, for the vehicle to determine its own position
CN114241373A (en) End-to-end vehicle behavior detection method, system, equipment and storage medium
CN114264310B (en) Positioning and navigation method, device, electronic equipment and computer storage medium
CN117953645A (en) Intelligent security early warning method and system for park based on patrol robot
CN117818659A (en) Vehicle safety decision method and device, electronic equipment, storage medium and vehicle
CN109344776B (en) Data processing method
Hardjono et al. Virtual Detection Zone in smart phone, with CCTV, and Twitter as part of an Integrated ITS
Govada et al. Road deformation detection
KR102188567B1 (en) System for monitoring the road using 3 dimension laser scanner

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee after: Beijing Idriverplus Technology Co.,Ltd.

Address before: B4-006, maker Plaza, 338 East Street, Huilongguan town, Changping District, Beijing 100096

Patentee before: Beijing Idriverplus Technology Co.,Ltd.