CN110059547B - Target detection method and device


Info

Publication number
CN110059547B
CN110059547B
Authority
CN
China
Prior art keywords
frame
candidate
relevant part
pedestrian
candidate frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910176719.3A
Other languages
Chinese (zh)
Other versions
CN110059547A (en)
Inventor
熊峰
张弘楷
李伯勋
俞刚
Current Assignee
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910176719.3A
Publication of CN110059547A
Application granted
Publication of CN110059547B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

The present disclosure provides a target detection method, a target detection apparatus, an electronic device, and a computer-readable storage medium. The method includes the following steps: regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to a first anchor frame; regressing a second candidate frame of the relevant part of the pedestrian according to a second anchor frame; and determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant part, and the second candidate frame of the relevant part. By regressing the pedestrian candidate frame and the relevant-part candidate frame from different anchor frames, the method and apparatus improve the matching degree of the relevant-part candidate frame and thereby improve positioning accuracy.

Description

Target detection method and device
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a target detection method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Pedestrian detection is widely applied in the security and autonomous driving fields; its aim is to pick pedestrians out of the background. Pedestrian detection is also the basis for many other tasks, such as pedestrian re-identification, pedestrian tracking, and human keypoint detection. To cope with pedestrian crowding and occlusion in security and autonomous driving scenes, joint pedestrian detection systems have been proposed that detect the person and the relevant parts of the person at the same time, which greatly improves pedestrian detection accuracy.
In the prior art, a joint pedestrian detection system regresses the offsets of both the person and the relevant parts of the person from the same anchor frame, thereby obtaining the corresponding candidate frames. Generally, the anchor frame is designed in advance for the person frame, so the anchor frame may not match the frame of the relevant part of the person. This mismatch greatly increases the difficulty of regressing the relevant part and ultimately leads to inaccurate positioning of the relevant-part candidate frame.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a target detection method, an apparatus, an electronic device, and a computer-readable storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a target detection method, including:
regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
regressing a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
and determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant part and the second candidate frame of the relevant part.
Further, the determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant portion, and the second candidate frame of the relevant portion includes:
determining a final related part candidate frame according to the first candidate frame of the related part and the second candidate frame of the related part;
and determining a pedestrian detection frame according to the pedestrian candidate frame and the final relevant part candidate frame.
Further, the determining a final frame candidate of the relevant part according to the first frame candidate of the relevant part and the second frame candidate of the relevant part includes:
calculating the intersection ratio of the second candidate frame of the relevant part and the first candidate frame of the relevant part;
and determining a final related part candidate frame according to the intersection ratio.
Further, the determining a final relevant part candidate frame according to the intersection ratio includes:
when the number of the second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold value, taking the second candidate frame of the relevant part as a final relevant part candidate frame; or,
when the number of the second candidate frames of the relevant part is multiple, selecting the second candidate frame of the relevant part with the largest intersection ratio as a relevant part target candidate frame, and if the intersection ratio is greater than a preset threshold value, taking the relevant part target candidate frame as a final relevant part candidate frame.
Further, the determining a pedestrian detection frame according to the pedestrian candidate frame and the final relevant part candidate frame includes:
and replacing the first frame candidate of the relevant part with the final frame candidate of the relevant part, thereby determining a pedestrian detection frame.
According to a second aspect of the embodiments of the present disclosure, there is provided an object detection apparatus including:
the pedestrian frame determining module is used for regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
a relevant part frame determining module, configured to regress a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
and the pedestrian detection frame determination module is used for determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant part and the second candidate frame of the relevant part.
Further, the pedestrian detection frame determination module includes:
a relevant part frame determining unit configured to determine a final relevant part frame candidate based on the first frame candidate of the relevant part and the second frame candidate of the relevant part;
a pedestrian detection frame determination unit configured to determine a pedestrian detection frame from the pedestrian candidate frame and the final relevant part candidate frame.
Further, the relevant portion frame determining unit is specifically configured to: calculating the intersection ratio of the second candidate frame of the relevant part and the first candidate frame of the relevant part; and determining a final related part candidate frame according to the intersection ratio.
Further, the relevant portion frame determining unit is specifically configured to: when the number of the second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold value, taking the second candidate frame of the relevant part as a final relevant part candidate frame; or, when the number of the second candidate frames of the relevant portion is multiple, selecting the second candidate frame of the relevant portion with the largest intersection ratio as a relevant portion target candidate frame, and if the intersection ratio is greater than a preset threshold value, taking the relevant portion target candidate frame as a final relevant portion candidate frame.
Further, the pedestrian detection frame determination unit is specifically configured to: and replacing the first frame candidate of the relevant part with the second frame candidate of the relevant part, thereby determining a pedestrian detection frame.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing processor-executable instructions; wherein the processor is configured to perform any one of the target detection methods described in the embodiments of the present disclosure.
According to a fourth aspect of the embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium, wherein instructions of the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform any one of the object detection methods described in the embodiments of the present disclosure.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects: through different anchor frames, the pedestrian candidate frame and the related part candidate frame are regressed, the matching degree of the related part candidate frame can be improved, and the positioning accuracy is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a target detection method according to an embodiment of the present disclosure.
Fig. 2 is a flowchart of a target detection method provided in the second embodiment of the present disclosure.
Fig. 3a is a flowchart of a target detection method provided in the third embodiment of the present disclosure.
Fig. 3b is a schematic diagram of a candidate frame determined by a first anchor frame in a target detection method according to a third embodiment of the present disclosure.
Fig. 3c is a schematic diagram of a candidate frame determined by a second anchor frame in a target detection method according to a third embodiment of the present disclosure.
Fig. 3d is a schematic diagram of a candidate frame after pedestrian detection in a target detection method according to a third embodiment of the disclosure.
Fig. 4 is a block diagram of a target detection apparatus according to a fourth embodiment of the present disclosure.
Fig. 5 is a block diagram of an electronic device according to a fifth embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Example one
Fig. 1 is a flowchart of an object detection method according to an embodiment of the present disclosure. The execution subject of the method may be the object detection device provided in the embodiments of the present disclosure, which may be integrated in a mobile terminal (e.g., a smart phone or a tablet computer), a notebook, or a fixed terminal (e.g., a desktop computer); the object detection device may be implemented in hardware or software. As shown in fig. 1, the method includes the following steps:
in step S11, a pedestrian frame candidate and a first frame candidate for a relevant part of the pedestrian are regressed based on the first anchor frame.
In this document, in order to distinguish between different anchor frames, an anchor frame that occurs first is referred to herein as a first anchor frame, and an anchor frame that occurs subsequently is referred to herein as a second anchor frame.
The first anchor frame is an anchor frame designed in advance for a person and used for a pedestrian regression candidate frame and a first candidate frame for regression of a relevant part of the pedestrian.
The pedestrian candidate frame is used for acquiring a human body image of a pedestrian.
Specifically, first, a binding relationship between a person and a related part is established, and in this step, different objects, such as the person and the related part, are regressed by using the same anchor frame. Because the two candidate frames are based on the same anchor frame, it can be determined that the two candidate frames are in a binding relationship, that is, a certain part is a part of a certain person.
In this document, in order to distinguish between different related part candidate frames, a related part candidate frame that appears first is referred to herein as a first candidate frame of a related part, and related part candidate frames that appear subsequently are referred to in turn as second candidate frames of a related part.
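The regression in step S11 can be sketched as follows: from each first anchor frame, the network predicts one set of offsets for the pedestrian and one for the relevant part, and both boxes are decoded from the same anchor, so pairs sharing an index are implicitly bound. This is an illustrative sketch only; the offset parameterization (the standard Faster R-CNN style `[dx, dy, dw, dh]` encoding) and all function names are assumptions, since the patent does not fix a particular encoding.

```python
import numpy as np

def decode(anchors, offsets):
    """Apply [dx, dy, dw, dh] offsets to [cx, cy, w, h] anchors.

    Uses the common Faster R-CNN parameterization (an assumed choice):
    centers shift proportionally to anchor size, sizes scale by exp().
    """
    cx = anchors[:, 0] + offsets[:, 0] * anchors[:, 2]
    cy = anchors[:, 1] + offsets[:, 1] * anchors[:, 3]
    w = anchors[:, 2] * np.exp(offsets[:, 2])
    h = anchors[:, 3] * np.exp(offsets[:, 3])
    return np.stack([cx, cy, w, h], axis=1)

def regress_bound_pairs(first_anchors, person_offsets, part_offsets):
    """Regress a pedestrian box and a part box from each first anchor.

    Both boxes are decoded from the same anchor, so the pair at each
    index is bound: that part belongs to that pedestrian.
    """
    person_boxes = decode(first_anchors, person_offsets)
    part_boxes = decode(first_anchors, part_offsets)
    return list(zip(person_boxes, part_boxes))
```

With zero offsets the decoded boxes coincide with the anchors, which is a quick sanity check on the encoding.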
And step S12, a second candidate frame of the relevant part of the pedestrian is regressed according to a second anchor frame.
Wherein the second anchor frame is an anchor frame designed in advance for the relevant part of the person and used for regressing the candidate frame of the relevant part of the person.
The relevant parts can be part of the characteristics of the human body, including the human head, the human face, the human legs, the human arms and the like. The relevant part candidate frame is used to acquire an image of the relevant part.
Specifically, firstly, a binding relationship between a pedestrian and a pedestrian related part is established, and then a pedestrian candidate frame and a related part candidate frame are regressed by adopting a first anchor frame and a second anchor frame respectively.
In step S13, a pedestrian detection frame is determined from the pedestrian candidate frame, the first candidate frame of the relevant portion, and the second candidate frame of the relevant portion.
In the embodiment, the pedestrian candidate frame and the related part candidate frame are regressed through different anchor frames, so that the matching degree of the related part candidate frame can be improved, and the positioning accuracy is improved.
Example two
Fig. 2 is a flowchart of an object detection method provided in the second embodiment of the present disclosure. On the basis of the above embodiment, this embodiment further optimizes the pedestrian/relevant-part candidate frame pair formed by the pedestrian candidate frame and the second candidate frame of the relevant part. As shown in fig. 2, the method specifically includes:
in step S21, a pedestrian frame candidate and a first frame candidate of the relevant part of the pedestrian are regressed according to the first anchor frame.
Specifically, first, a binding relationship between a pedestrian and a relevant part of the pedestrian is established, and in this step, different targets, such as a human body and the relevant part, are regressed by using the same anchor frame. Because the two candidate frames are based on the same anchor frame, the two regressed candidate frames can be determined to have a binding relationship, namely, a certain part is a part of a certain pedestrian.
And step S22, a second candidate frame of the relevant part of the pedestrian is regressed according to a second anchor frame.
In step S23, a final frame candidate of the relevant portion is determined based on the first frame candidate of the relevant portion and the second frame candidate of the relevant portion.
In an alternative embodiment, step S23 includes:
step S231 calculates an intersection ratio of the second frame candidate of the relevant portion and the first frame candidate of the relevant portion.
The intersection ratio (intersection over union, IoU) is a concept used in target detection; it is the overlap rate of the second candidate frame of the relevant part and the first candidate frame of the relevant part, that is, the ratio of the area of their intersection to the area of their union.
And step S232, determining a final related part candidate frame according to the intersection ratio.
Further, step S232 includes:
when the number of the second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold value, taking the second candidate frame of the relevant part as a final relevant part candidate frame; or, when the number of the second candidate frames of the relevant portion is multiple, selecting the second candidate frame of the relevant portion with the largest intersection ratio as a relevant portion target candidate frame, and if the intersection ratio is greater than a preset threshold value, taking the relevant portion target candidate frame as a final relevant portion candidate frame.
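Steps S231 and S232 can be sketched as follows. The IoU computation is the standard one; the 0.5 threshold and the function names are assumptions for illustration, since the patent only speaks of "a preset threshold".

```python
def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def select_final_part_box(first_box, second_boxes, threshold=0.5):
    """Pick the final relevant-part candidate per the rule above.

    When several second candidates exist, the one with the largest IoU
    against the first candidate is taken as the target candidate; it
    becomes the final candidate only if that IoU exceeds the threshold,
    otherwise the first candidate is kept.
    """
    best = max(second_boxes, key=lambda b: iou(first_box, b))
    return best if iou(first_box, best) > threshold else first_box
```

Keeping the first candidate as the fallback reflects the fact that replacement only happens when the dedicated part branch actually overlaps the bound part box well enough.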
In step S24, a pedestrian detection frame is determined from the pedestrian candidate frame and the final relevant part candidate frame.
In an alternative embodiment, step S24 includes:
and replacing the first frame candidate of the relevant part with the final frame candidate of the relevant part, thereby determining a pedestrian detection frame.
In the embodiment, the pedestrian candidate frame and the related part candidate frame are regressed through different anchor frames, so that the matching degree of the related part candidate frame can be improved, and the positioning accuracy is improved.
EXAMPLE III
Fig. 3a is a flowchart of a target detection method provided in the third embodiment of the present disclosure, and this embodiment is a specific embodiment and is used to describe the present disclosure in detail. As shown in fig. 3a, the method comprises the following steps:
in step S31, a pedestrian frame candidate and a first frame candidate of the relevant part of the pedestrian are regressed according to the first anchor frame.
And step S32, a second candidate frame of the relevant part of the pedestrian is regressed according to a second anchor frame.
Step S33 is to calculate the intersection ratio between the second frame candidate of the relevant portion and the first frame candidate of the relevant portion.
And step S34, if the intersection ratio is larger than a preset threshold value, replacing the first candidate frame of the relevant part with the second candidate frame of the relevant part, and determining a pedestrian detection frame.
In the following, the present embodiment is illustrated on the basis of a joint pedestrian detection system. For each first anchor frame pre-designed for a person, the joint pedestrian detection system simultaneously regresses the pedestrian and the relevant part of the pedestrian from the first anchor frame, finally obtaining the pedestrian candidate frame and the first candidate frame of the relevant part, as shown in fig. 3b. In addition, this embodiment adds a dedicated detection branch for the relevant part of the pedestrian, built on a basic single-class target detection system. Specifically, a matched second anchor frame is designed in advance for the relevant part of the pedestrian, and a second candidate frame of the relevant part is regressed from this second anchor frame, so that an accurately positioned relevant-part candidate frame is obtained, as shown in fig. 3c. Finally, a replacement strategy is applied to the candidate frames generated by the two branches, and, as shown in fig. 3d, a pedestrian detection frame is determined from the pedestrian candidate frame and the corresponding relevant-part candidate frame after replacement.
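The replacement strategy across the two branches can be sketched end to end as below. The pairing structure, function names, and the 0.5 threshold are illustrative assumptions; the patent describes the strategy but not a concrete data layout.

```python
def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse_branches(pairs, part_boxes, threshold=0.5):
    """Fuse the two detection branches via the replacement strategy.

    pairs: (pedestrian_box, first_part_box) tuples from the joint
           branch (regressed from first anchor frames).
    part_boxes: part candidates from the dedicated part branch
           (regressed from second anchor frames).
    Returns (pedestrian_box, final_part_box) detection frames.
    """
    results = []
    for person_box, first_part in pairs:
        final_part = first_part
        if part_boxes:
            best = max(part_boxes, key=lambda b: iou(first_part, b))
            if iou(first_part, best) > threshold:
                # Replace the bound part box with the better-localized
                # candidate from the dedicated part branch.
                final_part = best
        results.append((person_box, final_part))
    return results
```

Because the pedestrian box is untouched, the binding established by the first anchor frame is preserved; only the localization of the part box is refined.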
Example four
Fig. 4 is a block diagram of an object detection apparatus according to a fourth embodiment of the disclosure. The device may be integrated in a mobile terminal (e.g., a smart phone or a tablet computer), a notebook, or a fixed terminal (e.g., a desktop computer), and may be implemented in hardware or software. Referring to fig. 4, the apparatus includes a pedestrian frame determination module 41, a relevant part frame determination module 42, and a pedestrian detection frame determination module 43, wherein:
the pedestrian frame determination module 41 is configured to regress a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
the relevant part frame determining module 42 is configured to regress a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
the pedestrian detection frame determination module 43 is configured to determine a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant portion, and the second candidate frame of the relevant portion.
Further, the pedestrian detection frame determination module 43 includes a relevant part frame determination unit 431 and a pedestrian detection frame determination unit 432, wherein:
a relevant part frame determination unit 431 is configured to determine a final relevant part frame candidate from the first frame candidate of the relevant part and the second frame candidate of the relevant part;
the pedestrian detection frame determination unit 432 is configured to determine a pedestrian detection frame from the pedestrian candidate frame and the final relevant part candidate frame.
Further, the relevant part frame determining unit 431 is specifically configured to: calculating the intersection ratio of the second candidate frame of the relevant part and the first candidate frame of the relevant part; and determining a final related part candidate frame according to the intersection ratio.
Further, the relevant part frame determining unit 431 is specifically configured to: when the number of the second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold value, taking the second candidate frame of the relevant part as a final relevant part candidate frame; or, when the number of the second candidate frames of the relevant portion is multiple, selecting the second candidate frame of the relevant portion with the largest intersection ratio as a relevant portion target candidate frame, and if the intersection ratio is greater than a preset threshold value, taking the relevant portion target candidate frame as a final relevant portion candidate frame.
Further, the pedestrian detection frame determination unit 432 is specifically configured to: and replacing the first frame candidate of the relevant part with the second frame candidate of the relevant part, thereby determining a pedestrian detection frame.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
EXAMPLE five
An embodiment of the present disclosure provides an electronic device, including:
a processor;
a memory for storing processor-executable instructions; wherein the processor is configured to:
regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
regressing a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
and determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant part and the second candidate frame of the relevant part.
Further, the determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant portion, and the second candidate frame of the relevant portion includes:
determining a final related part candidate frame according to the first candidate frame of the related part and the second candidate frame of the related part;
and determining a pedestrian detection frame according to the pedestrian candidate frame and the final relevant part candidate frame.
Further, the determining a final frame candidate of the relevant part according to the first frame candidate of the relevant part and the second frame candidate of the relevant part includes:
calculating the intersection ratio of the second candidate frame of the relevant part and the first candidate frame of the relevant part;
and determining a final related part candidate frame according to the intersection ratio.
Further, the determining a final relevant part candidate frame according to the intersection ratio includes:
when the number of the second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold value, taking the second candidate frame of the relevant part as a final relevant part candidate frame; or,
when the number of the second candidate frames of the relevant part is multiple, selecting the second candidate frame of the relevant part with the largest intersection ratio as a relevant part target candidate frame, and if the intersection ratio is greater than a preset threshold value, taking the relevant part target candidate frame as a final relevant part candidate frame.
Further, the determining a pedestrian detection frame according to the pedestrian candidate frame and the final relevant part candidate frame includes:
and replacing the first frame candidate of the relevant part with the final frame candidate of the relevant part, thereby determining a pedestrian detection frame.
Fig. 5 is a block diagram of an electronic device provided in an embodiment of the present disclosure. For example, the electronic device may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 5, the electronic device may include one or more of the following components: a processing component 502, a memory 504, a power component 506, a multimedia component 508, an audio component 510, an input/output (I/O) interface 512, a sensor component 514, and a communication component 516.
The processing component 502 generally controls overall operation of the electronic device, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 502 may include one or more processors 520 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 502 can include one or more modules that facilitate interaction between the processing component 502 and other components. For example, the processing component 502 can include a multimedia module to facilitate interaction between the multimedia component 508 and the processing component 502.
The memory 504 is configured to store various types of data to support operations at the electronic device. Examples of such data include instructions for any application or method operating on the electronic device, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 504 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 506 provides power to the various components of the electronic device. The power components 506 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for an electronic device.
The multimedia component 508 includes a screen that provides an output interface between the electronic device and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 508 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 510 is configured to output and/or input audio signals. For example, the audio component 510 includes a Microphone (MIC) configured to receive external audio signals when the electronic device is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 504 or transmitted via the communication component 516. In some embodiments, audio component 510 further includes a speaker for outputting audio signals.
The I/O interface 512 provides an interface between the processing component 502 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 514 includes one or more sensors for providing various aspects of status assessment for the electronic device. For example, the sensor assembly 514 may detect an open/closed state of the electronic device, the relative positioning of components, such as a display and keypad of the electronic device, the sensor assembly 514 may detect a change in position of the electronic device or a component of the electronic device, the presence or absence of user contact with the electronic device, orientation or acceleration/deceleration of the electronic device, and a change in temperature of the electronic device. The sensor assembly 514 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 514 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 514 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 516 is configured to facilitate wired or wireless communication between the electronic device and other devices. The electronic device may access a wireless network based on a communication standard, such as WiFi or a carrier network (such as 2G, 3G, or 4G), or a combination thereof. In an exemplary embodiment, the communication component 516 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 516 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 504 comprising instructions, executable by the processor 520 of the electronic device to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, an application program is also provided, comprising instructions, such as those in the memory 504, executable by the processor 520 of the electronic device to perform the above-described method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles thereof and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (7)

1. A method of object detection, comprising:
regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
regressing a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant part and the second candidate frame of the relevant part, and specifically comprises the following steps:
determining a final related part candidate frame according to the first candidate frame of the related part and the second candidate frame of the related part;
and determining a pedestrian detection frame according to the pedestrian candidate frame and the final relevant part candidate frame.
2. The object detection method according to claim 1, wherein determining a final relevant part candidate frame from the first candidate frame of the relevant part and the second candidate frame of the relevant part includes:
calculating the intersection ratio of the second candidate frame of the relevant part and the first candidate frame of the relevant part;
and determining a final related part candidate frame according to the intersection ratio.
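The "intersection ratio" of claim 2 is the standard intersection-over-union (IoU) of two boxes. A minimal sketch follows, assuming axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the function name and box format are illustrative assumptions, not part of the claims:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For example, two 2x2 boxes offset by one unit in each direction overlap in a 1x1 region, giving an IoU of 1/7.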
3. The method of claim 2, wherein determining the final relevant site candidate box according to the intersection ratio comprises:
when the number of second candidate frames of the relevant part is one, if the intersection ratio is greater than a preset threshold, taking the second candidate frame of the relevant part as the final relevant part candidate frame; or
when the number of second candidate frames of the relevant part is multiple, selecting the second candidate frame of the relevant part with the largest intersection ratio as a relevant part target candidate frame, and if that intersection ratio is greater than the preset threshold, taking the relevant part target candidate frame as the final relevant part candidate frame.
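The selection rule of claim 3 (take the second-branch candidate with the largest intersection ratio, and keep it only if that ratio exceeds the preset threshold) can be sketched as follows. The helper names, box format, and default threshold value are illustrative assumptions, not taken from the claims:

```python
def _iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    iw = min(a[2], b[2]) - max(a[0], b[0])
    ih = min(a[3], b[3]) - max(a[1], b[1])
    inter = max(0.0, iw) * max(0.0, ih)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0


def select_final_candidate(first_box, second_boxes, threshold=0.5):
    """Pick the second-branch candidate with the largest IoU against the
    first-branch candidate; keep it only if that IoU exceeds the threshold.
    Covers both the single- and multi-candidate cases of claim 3."""
    if not second_boxes:
        return None
    best = max(second_boxes, key=lambda b: _iou(first_box, b))
    return best if _iou(first_box, best) > threshold else None
```

When no second-branch candidate clears the threshold, the function returns None, leaving the first-branch candidate in place (consistent with the replacement step of claim 4 only firing when a final candidate exists).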
4. The object detection method according to any one of claims 1 to 3, wherein the determining a pedestrian detection frame from the pedestrian candidate frame and the final relevant part candidate frame includes:
replacing the first candidate frame of the relevant part with the final relevant part candidate frame, thereby determining the pedestrian detection frame.
5. An object detection device, comprising:
the pedestrian frame determining module is used for regressing a pedestrian candidate frame and a first candidate frame of a relevant part of the pedestrian according to the first anchor frame;
a relevant part frame determining module, configured to regress a second candidate frame of the relevant part of the pedestrian according to a second anchor frame;
a pedestrian detection frame determination module for determining a pedestrian detection frame according to the pedestrian candidate frame, the first candidate frame of the relevant portion, and the second candidate frame of the relevant portion;
the pedestrian detection frame determination module includes: a relevant part frame determination unit and a pedestrian detection frame determination unit; wherein:
the relevant part frame determining unit is used for determining a final relevant part candidate frame according to the first candidate frame of the relevant part and the second candidate frame of the relevant part;
the pedestrian detection frame determination unit is configured to determine a pedestrian detection frame from the pedestrian candidate frame and the final relevant part candidate frame.
6. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions; wherein the processor is configured to perform the object detection method of any one of claims 1-4.
7. A non-transitory computer readable storage medium having instructions therein which, when executed by a processor of an electronic device, enable the electronic device to perform the object detection method of any one of claims 1-4.
CN201910176719.3A 2019-03-08 2019-03-08 Target detection method and device Active CN110059547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910176719.3A CN110059547B (en) 2019-03-08 2019-03-08 Target detection method and device


Publications (2)

Publication Number Publication Date
CN110059547A CN110059547A (en) 2019-07-26
CN110059547B true CN110059547B (en) 2021-06-25

Family

ID=67316107

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910176719.3A Active CN110059547B (en) 2019-03-08 2019-03-08 Target detection method and device

Country Status (1)

Country Link
CN (1) CN110059547B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110532985B (en) * 2019-09-02 2022-07-22 北京迈格威科技有限公司 Target detection method, device and system
CN113297881A (en) * 2020-02-24 2021-08-24 华为技术有限公司 Target detection method and related device

Citations (4)

Publication number Priority date Publication date Assignee Title
CN107169421A (en) * 2017-04-20 2017-09-15 华南理工大学 A kind of car steering scene objects detection method based on depth convolutional neural networks
CN107403141A (en) * 2017-07-05 2017-11-28 中国科学院自动化研究所 Method for detecting human face and device, computer-readable recording medium, equipment
WO2018029670A1 (en) * 2016-08-10 2018-02-15 Zeekit Online Shopping Ltd. System, device, and method of virtual dressing utilizing image processing, machine learning, and computer vision
CN108629354A (en) * 2017-03-17 2018-10-09 杭州海康威视数字技术股份有限公司 Object detection method and device


Non-Patent Citations (2)

Title
Anchor-based group detection in crowd scenes; Mulin Chen et al.; IEEE; 2017-06-19; pp. 1378-1382 *
Research on Pedestrian Detection Technology for Real-World Scenes; Li Yaobin; China Master's Theses Full-text Database, Information Science & Technology; 2018-09-15; pp. 28-63 *

Also Published As

Publication number Publication date
CN110059547A (en) 2019-07-26

Similar Documents

Publication Publication Date Title
US20170344192A1 (en) Method and device for playing live videos
EP3173970A1 (en) Image processing method and apparatus
CN107102772B (en) Touch control method and device
US10212386B2 (en) Method, device, terminal device, and storage medium for video effect processing
CN107202574B (en) Motion trail information correction method and device
CN109557999B (en) Bright screen control method and device and storage medium
CN107464253B (en) Eyebrow positioning method and device
CN107403144B (en) Mouth positioning method and device
CN107480785B (en) Convolutional neural network training method and device
CN108829475B (en) UI drawing method, device and storage medium
US10248855B2 (en) Method and apparatus for identifying gesture
CN111105454A (en) Method, device and medium for acquiring positioning information
CN106354504B (en) Message display method and device
CN110059547B (en) Target detection method and device
EP3322227A1 (en) Methods and apparatuses for controlling wireless connection, computer program and recording medium
CN106572268B (en) Information display method and device
CN108629814B (en) Camera adjusting method and device
CN108647074B (en) Method, device, hardware device and medium for displaying dynamic information in screen locking state
CN107682101B (en) Noise detection method and device and electronic equipment
CN107454204B (en) User information labeling method and device
CN105635573A (en) Pick-up head visual angle adjusting method and apparatus
CN106572431B (en) Equipment pairing method and device
WO2021103994A1 (en) Model training method and apparatus for information recommendation, electronic device and medium
CN105656639B (en) Group message display method and device
CN104850643B (en) Picture comparison method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant