CN110427917A - Method and apparatus for detecting key point - Google Patents
- Publication number
- CN110427917A CN110427917A CN201910750139.0A CN201910750139A CN110427917A CN 110427917 A CN110427917 A CN 110427917A CN 201910750139 A CN201910750139 A CN 201910750139A CN 110427917 A CN110427917 A CN 110427917A
- Authority
- CN
- China
- Prior art keywords
- target
- detected
- point cloud
- dimensional
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06T7/85—Stereo camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Data Mining & Analysis (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present application disclose a method and apparatus for detecting key points. A specific embodiment of the method includes: obtaining depth images and color images collected of a target to be detected from multiple angles; performing three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected; and generating key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images. This embodiment improves key point detection accuracy.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, and in particular to a method and apparatus for detecting key points.
Background
Top-down human body key point detection first detects each person with a target detection algorithm, and then performs skeleton key point detection on each detected individual. Algorithms in this direction achieve relatively high accuracy on public data sets, and this approach has therefore become the mainstream direction of human body key point detection technology.
Currently, a common approach to human body key point detection is to collect human body images with a color camera, perform pedestrian detection on the color images, and then perform key point detection on each detected person to extract key point information.
Summary of the invention
Embodiments of the present application propose a method and apparatus for detecting key points.
In a first aspect, an embodiment of the present application provides a method for detecting key points, including: obtaining depth images and color images collected of a target to be detected from multiple angles; performing three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected; and generating key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images.
In some embodiments, the depth images and color images are collected by depth cameras at multiple angles; and performing three-dimensional point cloud clustering on the depth images to generate the three-dimensional point cloud data of the target to be detected includes: obtaining intrinsic and extrinsic calibration parameters of the depth cameras; and projecting the depth images into three-dimensional space based on the intrinsic and extrinsic calibration parameters to obtain the three-dimensional point cloud data of the target to be detected.
In some embodiments, generating the key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images includes: back-projecting the three-dimensional point cloud data onto the plane of a color image based on the intrinsic and extrinsic calibration parameters to obtain two-dimensional point cloud data; determining a detection box of the target to be detected in the color image based on the two-dimensional point cloud data; extracting two-dimensional key point coordinates of the target to be detected from the detection box; and calculating three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic calibration parameters.
In some embodiments, extracting the two-dimensional key point coordinates of the target to be detected from the detection box includes: cropping the image region corresponding to the detection box from the color image; and inputting the image region into a pre-trained key point detection model to obtain the two-dimensional key point coordinates of the target to be detected.
In a second aspect, an embodiment of the present application provides an apparatus for detecting key points, including: an image obtaining unit configured to obtain depth images and color images collected of a target to be detected from multiple angles; a three-dimensional point cloud data generation unit configured to perform three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected; and a key point coordinate generation unit configured to generate key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images.
In some embodiments, the depth images and color images are collected by depth cameras at multiple angles; and the three-dimensional point cloud data generation unit includes: a parameter obtaining subunit configured to obtain the intrinsic and extrinsic calibration parameters of the depth cameras; and a three-dimensional point cloud data generation subunit configured to project the depth images into three-dimensional space based on the intrinsic and extrinsic calibration parameters to obtain the three-dimensional point cloud data of the target to be detected.
In some embodiments, the key point coordinate generation unit includes: a two-dimensional point cloud data generation subunit configured to back-project the three-dimensional point cloud data onto the plane of a color image based on the intrinsic and extrinsic calibration parameters to obtain two-dimensional point cloud data; a detection box determination subunit configured to determine a detection box of the target to be detected in the color image based on the two-dimensional point cloud data; a two-dimensional coordinate extraction subunit configured to extract two-dimensional key point coordinates of the target to be detected from the detection box; and a three-dimensional coordinate calculation subunit configured to calculate three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic calibration parameters.
In some embodiments, the two-dimensional coordinate extraction subunit includes: an image region cropping module configured to crop the image region corresponding to the detection box from the color image; and a two-dimensional coordinate generation module configured to input the image region into a pre-trained key point detection model to obtain the two-dimensional key point coordinates of the target to be detected.
In a third aspect, an embodiment of the present application provides an electronic device including: one or more processors; and a storage device on which one or more programs are stored. When the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable medium on which a computer program is stored. When the computer program is executed by a processor, the method described in any implementation of the first aspect is implemented.
The method and apparatus for detecting key points provided by the embodiments of the present application first obtain depth images and color images collected of a target to be detected from multiple angles; then perform three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected; and finally generate key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images. Extracting key point coordinates by combining three-dimensional point cloud data with color images improves key point detection accuracy. In particular, when the target is partially occluded in a color image, the key point coordinates of the occluded part can still be extracted, improving the key point detection result.
Brief description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-restrictive embodiments with reference to the attached drawings:
Fig. 1 is an exemplary system architecture to which the present application may be applied;
Fig. 2 is a flowchart of one embodiment of the method for detecting key points according to the present application;
Fig. 3 is a flowchart of another embodiment of the method for detecting key points according to the present application;
Fig. 4 is a structural schematic diagram of one embodiment of the apparatus for detecting key points according to the present application;
Fig. 5 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the attached drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention, and are not a limitation of the invention. It should also be noted that, for ease of description, only the parts relevant to the related invention are shown in the attached drawings.
It should be noted that, unless they conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the attached drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for detecting key points or the apparatus for detecting key points of the present application may be applied.
As shown in Fig. 1, the system architecture 100 may include capture devices 101, 102, 103, a network 104, and a server 105. The network 104 is the medium that provides the communication link between the capture devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables.
The capture devices 101, 102, 103 may interact with the server 105 through the network 104 to receive or send messages. The capture devices 101, 102, 103 may be hardware or software. When a capture device is hardware, it may be any of various electronic devices that support image or video capture, including but not limited to cameras, video cameras, and smart phones. When the capture devices 101, 102, 103 are software, they may be installed in the electronic devices listed above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module. No specific limitation is made here.
The server 105 may provide various services. For example, the server 105 may analyze and otherwise process data such as the obtained depth images and color images, and generate a processing result (such as the key point coordinates of a target to be detected).
It should be noted that the server 105 may be hardware or software. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server 105 is software, it may be implemented as multiple pieces of software or software modules (for example, for providing distributed services), or as a single piece of software or software module. No specific limitation is made here.
It should be noted that the method for detecting key points provided by the embodiments of the present application is generally executed by the server 105; accordingly, the apparatus for detecting key points is generally arranged in the server 105.
It should be understood that the numbers of capture devices, networks, and servers in Fig. 1 are only schematic. Any number of capture devices, networks, and servers may be provided according to implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for detecting key points according to the present application is shown. The method for detecting key points includes the following steps:
Step 201: obtain depth images and color images collected of a target to be detected from multiple angles.
In this embodiment, the executing body of the method for detecting key points (such as the server 105 shown in Fig. 1) may obtain, from multiple capture devices arranged near the target to be detected (such as the capture devices 101, 102, 103 shown in Fig. 1), depth images and color images collected of the target from multiple angles. Each capture device is at a particular angle to the target to be detected and collects a depth image and a color image of the target from that angle. For example, three capture devices may be set up at the front, side, and back of the target to be detected, and the three devices respectively collect depth images and color images of the front, side, and back of the target.
Here, a capture device may be any of various electronic devices with a shooting function, such as a depth camera. A depth camera, also called an RGB-D camera, can be used to shoot RGB-D images. An RGB-D image may include a color image (an RGB image) and a depth image. The pixel value of each pixel of the color image may be the color value of a point on the surface of the captured target. In general, all colors that human vision can perceive are obtained by varying and superimposing the three color channels red (R), green (G), and blue (B). The pixel value of each pixel of the depth image may be the distance between the depth camera and a point on the surface of the captured target. In general, the color image and the depth image are registered, so there is a one-to-one correspondence between the pixels of the color image and those of the depth image.
Step 202: perform three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected.
In this embodiment, the executing body may perform three-dimensional point cloud clustering on the depth images to generate the three-dimensional point cloud data of the target to be detected. For example, the executing body may convert the distances recorded by the depth images of multiple angles, between each depth camera and the points on the surface of the captured target, into coordinate points in a single shared coordinate system, thereby generating the three-dimensional point cloud data of the target to be detected.
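The conversion into a single shared coordinate system can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes each camera's pose is given as a rotation matrix R and translation vector t such that p_world = R · p_cam + t, with the point values made up for the example.

```python
# Minimal sketch: fuse per-camera 3-D points into one shared coordinate
# system. The (R, t) pair per camera is an assumed input; the patent does
# not fix a particular representation of the extrinsics.

def transform_point(R, t, p):
    """Apply p_world = R @ p_cam + t to one 3-D point (plain lists/tuples)."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    )

def fuse_clouds(clouds, extrinsics):
    """clouds: one point list per camera; extrinsics: one (R, t) per camera."""
    world = []
    for points, (R, t) in zip(clouds, extrinsics):
        world.extend(transform_point(R, t, p) for p in points)
    return world
```

For instance, two cameras that both see a point 1 m in front of them, with the second camera shifted 2 m along the X axis, contribute two distinct world points.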
Step 203: generate the key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images.
In this embodiment, the executing body may generate the key point coordinates of the target to be detected based on its three-dimensional point cloud data and the color images. For example, for the unoccluded part of the target in a color image, key points are extracted directly from the color image; for the occluded part of the target in a color image, key points are determined in combination with the three-dimensional point cloud data of the target.
The method for detecting key points provided by the embodiments of the present application first obtains depth images and color images collected of a target to be detected from multiple angles; then performs three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target; and finally generates the key point coordinates of the target based on the three-dimensional point cloud data and the color images. Extracting key point coordinates by combining three-dimensional point cloud data with color images improves key point detection accuracy. In particular, when the target is partially occluded in a color image, the key point coordinates of the occluded part can still be extracted, improving the key point detection result.
With further reference to Fig. 3, a flow 300 of another embodiment of the method for detecting key points according to the present application is shown. The method for detecting key points includes the following steps:
Step 301: obtain depth images and color images collected of a target to be detected from multiple angles.
In this embodiment, the specific operations of step 301 have been described in detail in step 201 of the embodiment shown in Fig. 2, and are not repeated here.
Step 302: obtain the intrinsic and extrinsic calibration parameters of the depth cameras.
In this embodiment, the executing body may obtain the intrinsic and extrinsic calibration parameters of the depth cameras. In general, the intrinsic and extrinsic calibration parameters are fixed once a depth camera has been set up. Depth cameras of different models may have different intrinsic parameters, which are parameters related to the characteristics of the camera itself; depth cameras at different positions have different extrinsic parameters. The intrinsic parameters may include, but are not limited to, 1/dx, 1/dy, u0, v0, and f. Here dx and dy denote the physical size represented by one pixel in the x and y directions respectively, which is key to converting between the image physical coordinate system and the pixel coordinate system; u0 and v0 denote the horizontal and vertical pixel offsets between the center pixel coordinates of the image and the image origin pixel coordinates; and f is the focal length of the depth camera. The extrinsic parameters may include, but are not limited to, ω, δ, θ, Tx, Ty, and Tz, where ω, δ, θ are the rotation parameters of the three axes and Tx, Ty, Tz are the translation parameters of the three axes.
Step 303: project the depth images into three-dimensional space based on the intrinsic and extrinsic calibration parameters to obtain the three-dimensional point cloud data of the target to be detected.
In this embodiment, the executing body may project the depth images into three-dimensional space based on the intrinsic and extrinsic calibration parameters, thereby obtaining the three-dimensional point cloud data of the target to be detected.
In general, a depth image records coordinates in a camera coordinate system. The executing body may convert the coordinates recorded by the depth images in the multiple camera coordinate systems into coordinates in the geocentric coordinate system to obtain the three-dimensional point cloud data of the target to be detected. In a camera coordinate system, the origin O is the optical center of the camera, the X and Y axes are parallel to the axes of the imaging plane coordinate system, and the Z axis is the optical axis of the camera, perpendicular to the imaging plane. In the geocentric coordinate system, the origin O is located at the center of mass of the Earth, and the system is denoted by three mutually orthogonal axes X, Y, Z: the X axis lies along the intersection of the prime meridian plane and the equatorial plane, positive toward the east; the Z axis coincides with the Earth's rotation axis, positive toward the north; and the Y axis is perpendicular to the XZ plane, completing a right-handed system.
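The projection in this step can be sketched with the standard pinhole back-projection: a pixel (u, v) with depth d lifts to camera coordinates x = (u − u0)·d/fx, y = (v − v0)·d/fy, z = d, after which the extrinsics take the point into the shared frame. A minimal sketch with made-up parameter values; the real system would use the calibrated fx, fy, u0, v0 of each depth camera.

```python
def pixel_to_camera(u, v, depth, fx, fy, u0, v0):
    """Lift one depth-image pixel into camera coordinates (pinhole model)."""
    x = (u - u0) * depth / fx
    y = (v - v0) * depth / fy
    return (x, y, depth)

def depth_image_to_cloud(depth_image, fx, fy, u0, v0):
    """depth_image: rows of depth values; returns camera-frame 3-D points."""
    cloud = []
    for v, row in enumerate(depth_image):
        for u, d in enumerate(row):
            if d > 0:  # depth 0 is treated as "no measurement"
                cloud.append(pixel_to_camera(u, v, d, fx, fy, u0, v0))
    return cloud
```

A pixel at the principal point maps straight onto the optical axis: only its depth survives as the Z coordinate.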
Step 304: back-project the three-dimensional point cloud data onto the plane of the color image based on the intrinsic and extrinsic calibration parameters to obtain two-dimensional point cloud data.
In this embodiment, the executing body may back-project the three-dimensional point cloud data onto the plane of the color image based on the intrinsic and extrinsic calibration parameters, thereby obtaining the two-dimensional point cloud data.
In general, the executing body may convert the coordinates of the three-dimensional point cloud data in the geocentric coordinate system into coordinates in a two-dimensional coordinate system parallel to the plane of the color image to obtain the two-dimensional point cloud data. In this way, for the unoccluded part of the target to be detected in the color image, the two-dimensional point cloud data coincides with it; for the occluded part of the target in the color image, the two-dimensional point cloud data still shows up on the color image.
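This back-projection is the forward pinhole projection applied to each 3-D point: u = fx·X/Z + u0, v = fy·Y/Z + v0, assuming the extrinsics have already brought the point into the color camera's frame. A minimal sketch with illustrative values:

```python
def camera_to_pixel(point, fx, fy, u0, v0):
    """Project one camera-frame 3-D point onto the image plane."""
    X, Y, Z = point
    return (fx * X / Z + u0, fy * Y / Z + v0)

def back_project(cloud, fx, fy, u0, v0):
    """3-D point cloud -> 2-D point cloud on the color-image plane."""
    # Points at or behind the camera (Z <= 0) cannot be projected and are dropped.
    return [camera_to_pixel(p, fx, fy, u0, v0) for p in cloud if p[2] > 0]
```

Note that occluded 3-D points still project to valid pixel coordinates, which is exactly why the two-dimensional point cloud covers the occluded part of the target.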
Step 305: determine the detection box of the target to be detected in the color image based on the two-dimensional point cloud data.
In this embodiment, the executing body may determine the detection box of the target to be detected in the color image based on the two-dimensional point cloud data of the target. Here, the detection box of the target contains not only the unoccluded part of the target in the color image but also the occluded part of the target in the color image.
In general, the executing body may fuse the two-dimensional point cloud data of the target with the color image, and then draw, in the color image, a box enclosing the two-dimensional point cloud data of the target as the detection box of the target.
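Drawing a box that encloses the projected point cloud reduces to an axis-aligned bounding box over the 2-D points; the optional margin parameter is an illustrative addition, not something the patent specifies:

```python
def detection_box(points_2d, margin=0.0):
    """Axis-aligned bounding box (u_min, v_min, u_max, v_max) of a 2-D point cloud."""
    us = [u for u, _ in points_2d]
    vs = [v for _, v in points_2d]
    return (min(us) - margin, min(vs) - margin,
            max(us) + margin, max(vs) + margin)
```

Because occluded points are included in the projection, the resulting box spans the whole target, visible or not.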
Step 306: extract the two-dimensional key point coordinates of the target to be detected from the detection box.
In this embodiment, the executing body may extract the two-dimensional key point coordinates of the target to be detected from the detection box. In general, the executing body may analyze the target inside the detection box, determine the key positions of the target, and use the coordinates of the key positions as the two-dimensional key point coordinates of the target.
In some optional implementations of this embodiment, the executing body may first crop the image region corresponding to the detection box from the color image, and then input the image region into a pre-trained key point detection model to obtain the two-dimensional key point coordinates of the target to be detected. The key point detection model can be used to extract two-dimensional key point coordinates, and is obtained by performing supervised training on an existing machine learning model using a machine learning method and training samples. In general, the key point detection model may be a ResNet model.
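The crop-then-detect implementation can be sketched as below. The trained model itself is not reproduced; a common output format for such models is one heatmap per key point, decoded by taking the peak and shifting it back into full-image coordinates. That heatmap convention is an assumption for illustration, not something the patent fixes.

```python
def crop_region(image, box):
    """Cut the detection-box region out of an image stored as rows of pixels."""
    u_min, v_min, u_max, v_max = box
    return [row[u_min:u_max] for row in image[v_min:v_max]]

def decode_heatmap(heatmap, box):
    """Peak of one key point heatmap, shifted back to full-image coordinates."""
    best_u, best_v, best_val = 0, 0, float("-inf")
    for v, row in enumerate(heatmap):
        for u, val in enumerate(row):
            if val > best_val:
                best_u, best_v, best_val = u, v, val
    return (box[0] + best_u, box[1] + best_v)
```

In practice the heatmaps would come from the pre-trained model (a ResNet, per the text) applied to the cropped region; only the bookkeeping around the model is shown here.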
Step 307: calculate the three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic calibration parameters.
In this embodiment, the executing body may calculate the three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic calibration parameters.
In general, the executing body may convert the coordinates of the two-dimensional point cloud data in the two-dimensional coordinate system into coordinates in the geocentric coordinate system to obtain the three-dimensional key point coordinates of the target to be detected.
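Under the same pinhole assumptions as step 303, this calculation can be sketched as lifting each 2-D key point back through the intrinsics using its depth value and then applying the extrinsics. The depth value per key point is an assumed input here (in practice it would come from the registered depth data):

```python
def keypoint_to_world(u, v, depth, fx, fy, u0, v0, R, t):
    """2-D key point + depth -> 3-D world coordinates (p_world = R @ p_cam + t)."""
    p_cam = ((u - u0) * depth / fx,
             (v - v0) * depth / fy,
             depth)
    return tuple(
        sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
        for i in range(3)
    )
```

A key point at the principal point with identity extrinsics lies on the optical axis at its measured depth.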
As can be seen from Fig. 3, compared with the embodiment corresponding to Fig. 2, the flow 300 of the method for detecting key points in this embodiment highlights the step of determining the detection box. Thus, the scheme described in this embodiment back-projects the three-dimensional point cloud data onto the plane of the color image to obtain two-dimensional point cloud data, accurately calculates the detection box of the target to be detected in the color image according to the distribution of the two-dimensional point cloud data, and extracts the key point coordinates of the target from the detection box. Even if the target is partially occluded in the color image, the key point coordinates of the occluded part can still be extracted, improving the key point detection result.
With further reference to Fig. 4, as an implementation of the methods shown in the figures above, the present application provides one embodiment of an apparatus for detecting key points. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied in various electronic devices.
As shown in Fig. 4, the apparatus 400 for detecting key points of this embodiment may include: an image obtaining unit 401, a three-dimensional point cloud data generation unit 402, and a key point coordinate generation unit 403. The image obtaining unit 401 is configured to obtain depth images and color images collected of a target to be detected from multiple angles; the three-dimensional point cloud data generation unit 402 is configured to perform three-dimensional point cloud clustering on the depth images to generate three-dimensional point cloud data of the target to be detected; and the key point coordinate generation unit 403 is configured to generate key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color images.
In this embodiment, for the specific processing of the image obtaining unit 401, the three-dimensional point cloud data generation unit 402, and the key point coordinate generation unit 403 of the apparatus 400 for detecting key points, and for the technical effects they bring, reference may be made to the descriptions of steps 201-203 in the embodiment corresponding to Fig. 2, which are not repeated here.
In some optional implementations of this embodiment, the depth images and color images are collected by depth cameras at multiple angles; and the three-dimensional point cloud data generation unit 402 includes: a parameter obtaining subunit (not shown) configured to obtain the intrinsic and extrinsic calibration parameters of the depth cameras; and a three-dimensional point cloud data generation subunit (not shown) configured to project the depth images into three-dimensional space based on the intrinsic and extrinsic calibration parameters to obtain the three-dimensional point cloud data of the target to be detected.
In some optional implementations of this embodiment, the key point coordinate generation unit 403 includes: a two-dimensional point cloud data generation subunit (not shown) configured to back-project the three-dimensional point cloud data onto the plane of the color image based on the intrinsic and extrinsic calibration parameters to obtain two-dimensional point cloud data; a detection box determination subunit (not shown) configured to determine the detection box of the target to be detected in the color image based on the two-dimensional point cloud data; a two-dimensional coordinate extraction subunit (not shown) configured to extract the two-dimensional key point coordinates of the target to be detected from the detection box; and a three-dimensional coordinate calculation subunit (not shown) configured to calculate the three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic calibration parameters.
In some optional implementations of this embodiment, the two-dimensional coordinate extraction subunit includes: an image region cropping module (not shown) configured to crop the image region corresponding to the detection box from the color image; and a two-dimensional coordinate generation module (not shown) configured to input the image region into a pre-trained key point detection model to obtain the two-dimensional key point coordinates of the target to be detected.
Referring now to Fig. 5, a structural schematic diagram of a computer system 500 of an electronic device (such as the server 105 shown in Fig. 1) suitable for implementing the embodiments of the present application is shown. The electronic device shown in Fig. 5 is only an example and should not bring any limitation to the function and scope of use of the embodiments of the present application.
As shown in Fig. 5, the computer system 500 includes a central processing unit (CPU) 501, which can execute various appropriate actions and processing according to a program stored in a read-only memory (ROM) 502 or a program loaded from a storage section 508 into a random access memory (RAM) 503. The RAM 503 also stores various programs and data required for the operation of the system 500. The CPU 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
The following components are connected to the I/O interface 505: an input section 506 including a keyboard, a mouse, and the like; an output section 507 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, and the like; a storage section 508 including a hard disk and the like; and a communication section 509 including a network interface card such as a LAN card or a modem. The communication section 509 performs communication processing via a network such as the Internet. A driver 510 is also connected to the I/O interface 505 as needed. A removable medium 511, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 510 as needed, so that a computer program read from it can be installed into the storage section 508 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 509, and/or installed from the removable medium 511. When the computer program is executed by the central processing unit (CPU) 501, the above-mentioned functions defined in the methods of the present application are executed.
It should be noted that the computer-readable medium described herein may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, the computer-readable storage medium may be any tangible medium containing or storing a program, which may be used by, or used in combination with, an instruction execution system, apparatus or device. In the present application, the computer-readable signal medium may include a data signal propagated in a baseband or as a part of a carrier wave, in which computer-readable program code is carried. The propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. The computer-readable signal medium may also be any computer-readable medium other than the computer-readable storage medium, capable of sending, propagating or transmitting a program for use by, or in combination with, an instruction execution system, apparatus or device. The program code contained on the computer-readable medium may be transmitted with any suitable medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
Computer program code for executing the operations of the present application may be written in one or more programming languages or a combination thereof. The programming languages include object-oriented programming languages such as Java, Smalltalk and C++, and also include conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on a user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or an electronic device. In scenarios involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flow charts and block diagrams in the accompanying drawings illustrate the architectures, functions and operations that may be implemented by the systems, methods and computer program products according to the various embodiments of the present application. In this regard, each box in a flow chart or block diagram may represent a module, a program segment, or a part of code, which comprises one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions denoted in the boxes may occur in an order different from that indicated in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should further be noted that each box in a block diagram and/or flow chart, and a combination of boxes in a block diagram and/or flow chart, may be implemented by a dedicated hardware-based system executing the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units involved in the embodiments of the present application may be implemented by means of software, or may be implemented by means of hardware. The described units may also be provided in a processor; for example, a processor may be described as comprising an image acquisition unit, a three-dimensional point cloud data generation unit and a key point coordinate generation unit. The names of these units do not, in some cases, constitute a limitation on the units themselves. For example, the image acquisition unit may alternatively be described as "a unit for acquiring a depth image and a color image obtained by collecting a target to be detected at multiple angles".
As another aspect, the present application further provides a computer-readable medium. The computer-readable medium may be included in the electronic equipment described in the above embodiments, or may exist alone without being assembled into the electronic equipment. The computer-readable medium carries one or more programs which, when executed by the electronic equipment, cause the electronic equipment to: acquire a depth image and a color image obtained by collecting a target to be detected at multiple angles; perform three-dimensional point cloud clustering on the depth image to generate three-dimensional point cloud data of the target to be detected; and generate key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color image.
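The "three-dimensional point cloud clustering" step above can be illustrated with a greedy Euclidean clustering pass that groups nearby 3-D points, so that the cluster belonging to the target to be detected can be separated from background points. This is a minimal sketch only: the function name, the distance threshold `eps`, and the naive O(n²) neighbor search are illustrative assumptions, not the patented implementation.

```python
def cluster_points(points, eps):
    """Greedy Euclidean clustering: a point joins a cluster if it lies
    within `eps` of any point already in that cluster."""
    clusters, unassigned = [], list(points)
    while unassigned:
        seed = [unassigned.pop()]  # start a new cluster from any remaining point
        cluster = []
        while seed:
            p = seed.pop()
            cluster.append(p)
            # collect still-unassigned points within eps of p
            near = [q for q in unassigned
                    if sum((a - b) ** 2 for a, b in zip(p, q)) <= eps ** 2]
            for q in near:
                unassigned.remove(q)
            seed.extend(near)
        clusters.append(cluster)
    return clusters

# Two points close together and one far away -> two clusters.
pts = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (5.0, 5.0, 5.0)]
groups = cluster_points(pts, eps=0.5)
print(len(groups))  # 2
```

In practice the largest (or most central) cluster would be taken as the point cloud of the target to be detected.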
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover, without departing from the above inventive concept, other technical solutions formed by any combination of the above technical features or their equivalent features — for example, technical solutions formed by mutually replacing the above features with (but not limited to) technical features having similar functions disclosed in the present application.
Claims (10)
1. A method for detecting key points, comprising:
acquiring a depth image and a color image obtained by collecting a target to be detected at multiple angles;
performing three-dimensional point cloud clustering on the depth image to generate three-dimensional point cloud data of the target to be detected; and
generating key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color image.
2. The method according to claim 1, wherein the depth image and the color image are collected by depth cameras at the multiple angles; and
the performing three-dimensional point cloud clustering on the depth image to generate the three-dimensional point cloud data of the target to be detected comprises:
acquiring intrinsic and extrinsic camera calibration parameters corresponding to the depth cameras; and
projecting the depth image into a three-dimensional space based on the intrinsic and extrinsic camera calibration parameters, to obtain the three-dimensional point cloud data of the target to be detected.
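The projection in claim 2 corresponds to the standard pinhole back-projection: a pixel (u, v) with depth d and intrinsics (fx, fy, cx, cy) maps to the camera-frame point ((u − cx)·d/fx, (v − cy)·d/fy, d). The sketch below assumes a single camera frame (the extrinsic transform to a common world frame is omitted) and uses hypothetical parameter names:

```python
def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Project a depth image (row-major 2-D list of depth values) into
    camera-frame 3-D points with the pinhole model; pixels with no depth
    reading (d <= 0) are skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, d in enumerate(row):
            if d <= 0:
                continue
            points.append(((u - cx) * d / fx, (v - cy) * d / fy, d))
    return points

# A 2x2 depth image, principal point at the origin pixel, unit focal lengths.
cloud = depth_to_point_cloud([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(cloud)  # [(0.0, 0.0, 1.0), (2.0, 0.0, 2.0), (4.0, 4.0, 4.0)]
```

With multiple calibrated depth cameras, each camera's points would additionally be transformed by its extrinsic pose before the clouds are merged.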
3. The method according to claim 2, wherein the generating the key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color image comprises:
back-projecting the three-dimensional point cloud data onto the plane where the color image lies, based on the intrinsic and extrinsic camera calibration parameters, to obtain two-dimensional point cloud data;
determining a detection box of the target to be detected in the color image based on the two-dimensional point cloud data;
extracting two-dimensional key point coordinates of the target to be detected from the detection box; and
calculating three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic camera calibration parameters.
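The first two steps of claim 3 — projecting the 3-D cloud back onto the image plane and boxing the resulting 2-D points — can be sketched as follows. This is an illustration under the same single-camera-frame assumption as before; the detection box here is simply the axis-aligned bounds of the projected points:

```python
def project_to_image(points, fx, fy, cx, cy):
    """Project camera-frame 3-D points onto the image plane:
    u = fx*X/Z + cx, v = fy*Y/Z + cy (points behind the camera are dropped)."""
    return [(fx * x / z + cx, fy * y / z + cy) for x, y, z in points if z > 0]

def bounding_box(pixels):
    """Axis-aligned detection box (u_min, v_min, u_max, v_max) enclosing
    the 2-D point cloud."""
    us = [u for u, _ in pixels]
    vs = [v for _, v in pixels]
    return (min(us), min(vs), max(us), max(vs))

pts = [(0.0, 0.0, 1.0), (2.0, 0.0, 2.0), (4.0, 4.0, 4.0)]
pix = project_to_image(pts, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
print(bounding_box(pix))  # (0.0, 0.0, 1.0, 1.0)
```

In a real system the box would typically be padded by a margin before cropping, and the projection would first map the cloud into the color camera's frame using the extrinsic calibration.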
4. The method according to claim 3, wherein the extracting the two-dimensional key point coordinates of the target to be detected from the detection box comprises:
cutting out an image region corresponding to the detection box from the color image; and
inputting the image region into a pre-trained key point detection model, to obtain the two-dimensional key point coordinates of the target to be detected.
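Claim 4's cropping step, together with the final lifting of a detected 2-D keypoint back to 3-D (the last step of claim 3), might look like the sketch below. The pre-trained keypoint detection model is abstracted away — the 2-D keypoint (u, v) is assumed given — and the 3-D coordinate is recovered from the depth at that pixel with the same hypothetical intrinsics:

```python
def crop(image, box):
    """Cut the detection-box region (u_min, v_min, u_max, v_max), inclusive,
    out of a row-major image."""
    u0, v0, u1, v1 = box
    return [row[u0:u1 + 1] for row in image[v0:v1 + 1]]

def keypoint_to_3d(u, v, depth, fx, fy, cx, cy):
    """Lift a 2-D keypoint to 3-D using the depth value at that pixel
    and the pinhole intrinsics (inverse of the image projection)."""
    d = depth[v][u]
    return ((u - cx) * d / fx, (v - cy) * d / fy, d)

image = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
print(crop(image, (1, 0, 2, 1)))  # [[1, 2], [4, 5]]
print(keypoint_to_3d(1, 0, [[1.0, 2.0], [0.0, 4.0]], 1.0, 1.0, 0.0, 0.0))  # (2.0, 0.0, 2.0)
```

A keypoint predicted inside the cropped region would first be offset by (u_min, v_min) back into full-image coordinates before the depth lookup.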
5. An apparatus for detecting key points, comprising:
an image acquisition unit, configured to acquire a depth image and a color image obtained by collecting a target to be detected at multiple angles;
a three-dimensional point cloud data generation unit, configured to perform three-dimensional point cloud clustering on the depth image to generate three-dimensional point cloud data of the target to be detected; and
a key point coordinate generation unit, configured to generate key point coordinates of the target to be detected based on the three-dimensional point cloud data and the color image.
6. The apparatus according to claim 5, wherein the depth image and the color image are collected by depth cameras at the multiple angles; and
the three-dimensional point cloud data generation unit comprises:
a parameter acquisition subunit, configured to acquire intrinsic and extrinsic camera calibration parameters corresponding to the depth cameras; and
a three-dimensional point cloud data generation subunit, configured to project the depth image into a three-dimensional space based on the intrinsic and extrinsic camera calibration parameters, to obtain the three-dimensional point cloud data of the target to be detected.
7. The apparatus according to claim 6, wherein the key point coordinate generation unit comprises:
a two-dimensional point cloud data generation subunit, configured to back-project the three-dimensional point cloud data onto the plane where the color image lies, based on the intrinsic and extrinsic camera calibration parameters, to obtain two-dimensional point cloud data;
a detection box determination subunit, configured to determine a detection box of the target to be detected in the color image based on the two-dimensional point cloud data;
a two-dimensional coordinate extraction subunit, configured to extract two-dimensional key point coordinates of the target to be detected from the detection box; and
a three-dimensional coordinate calculation subunit, configured to calculate three-dimensional key point coordinates of the target to be detected based on the two-dimensional key point coordinates and the intrinsic and extrinsic camera calibration parameters.
8. The apparatus according to claim 7, wherein the two-dimensional coordinate extraction subunit comprises:
an image region cutting module, configured to cut out an image region corresponding to the detection box from the color image; and
a two-dimensional coordinate generation module, configured to input the image region into a pre-trained key point detection model, to obtain the two-dimensional key point coordinates of the target to be detected.
9. An electronic device, comprising:
one or more processors; and
a storage apparatus, storing one or more programs thereon,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-4.
10. A computer-readable medium, storing a computer program thereon, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910750139.0A CN110427917B (en) | 2019-08-14 | 2019-08-14 | Method and device for detecting key points |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110427917A true CN110427917A (en) | 2019-11-08 |
CN110427917B CN110427917B (en) | 2022-03-22 |
Family
ID=68414702
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910750139.0A Active CN110427917B (en) | 2019-08-14 | 2019-08-14 | Method and device for detecting key points |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110427917B (en) |
Cited By (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111028283A (en) * | 2019-12-11 | 2020-04-17 | 北京迈格威科技有限公司 | Image detection method, device, equipment and readable storage medium |
CN111079597A (en) * | 2019-12-05 | 2020-04-28 | 联想(北京)有限公司 | Three-dimensional information detection method and electronic equipment |
CN111179328A (en) * | 2019-12-31 | 2020-05-19 | 智车优行科技(上海)有限公司 | Data synchronization calibration method and device, readable storage medium and electronic equipment |
CN111199198A (en) * | 2019-12-27 | 2020-05-26 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
CN111222401A (en) * | 2019-11-14 | 2020-06-02 | 北京华捷艾米科技有限公司 | Method and device for identifying three-dimensional coordinates of hand key points |
CN111339880A (en) * | 2020-02-19 | 2020-06-26 | 北京市商汤科技开发有限公司 | Target detection method and device, electronic equipment and storage medium |
CN111523468A (en) * | 2020-04-23 | 2020-08-11 | 北京百度网讯科技有限公司 | Human body key point identification method and device |
CN111582240A (en) * | 2020-05-29 | 2020-08-25 | 上海依图网络科技有限公司 | Object quantity identification method, device, equipment and medium |
CN111723688A (en) * | 2020-06-02 | 2020-09-29 | 北京的卢深视科技有限公司 | Human body action recognition result evaluation method and device and electronic equipment |
CN111797745A (en) * | 2020-06-28 | 2020-10-20 | 北京百度网讯科技有限公司 | Training and predicting method, device, equipment and medium of object detection model |
CN112053427A (en) * | 2020-10-15 | 2020-12-08 | 珠海格力智能装备有限公司 | Point cloud feature extraction method, device, equipment and readable storage medium |
CN112489126A (en) * | 2020-12-10 | 2021-03-12 | 浙江商汤科技开发有限公司 | Vehicle key point information detection method, vehicle control method and device and vehicle |
CN112668460A (en) * | 2020-12-25 | 2021-04-16 | 北京百度网讯科技有限公司 | Target detection method, electronic equipment, road side equipment and cloud control platform |
CN113312947A (en) * | 2020-02-27 | 2021-08-27 | 北京沃东天骏信息技术有限公司 | Method and device for determining behavior object |
CN113344917A (en) * | 2021-07-28 | 2021-09-03 | 浙江华睿科技股份有限公司 | Detection method, detection device, electronic equipment and storage medium |
CN113763465A (en) * | 2020-06-02 | 2021-12-07 | 中移(成都)信息通信科技有限公司 | Garbage determination system, model training method, determination method and determination device |
WO2022011560A1 (en) * | 2020-07-14 | 2022-01-20 | Oppo广东移动通信有限公司 | Image cropping method and apparatus, electronic device, and storage medium |
US20230033177A1 (en) * | 2021-07-30 | 2023-02-02 | Zoox, Inc. | Three-dimensional point clouds based on images and depth data |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102692236A (en) * | 2012-05-16 | 2012-09-26 | 浙江大学 | Visual milemeter method based on RGB-D camera |
US20130163879A1 (en) * | 2010-08-30 | 2013-06-27 | Bk-Imaging Ltd. | Method and system for extracting three-dimensional information |
US20140285485A1 (en) * | 2013-03-25 | 2014-09-25 | Superd Co. Ltd. | Two-dimensional (2d)/three-dimensional (3d) image processing method and system |
CN106709947A (en) * | 2016-12-20 | 2017-05-24 | 西安交通大学 | RGBD camera-based three-dimensional human body rapid modeling system |
CN106778628A (en) * | 2016-12-21 | 2017-05-31 | 张维忠 | A kind of facial expression method for catching based on TOF depth cameras |
CN106909875A (en) * | 2016-09-12 | 2017-06-30 | 湖南拓视觉信息技术有限公司 | Face shape of face sorting technique and system |
CN107368778A (en) * | 2017-06-02 | 2017-11-21 | 深圳奥比中光科技有限公司 | Method for catching, device and the storage device of human face expression |
US20180068178A1 (en) * | 2016-09-05 | 2018-03-08 | Max-Planck-Gesellschaft Zur Förderung D. Wissenschaften E.V. | Real-time Expression Transfer for Facial Reenactment |
CN107852533A (en) * | 2015-07-14 | 2018-03-27 | 三星电子株式会社 | Three-dimensional content generating means and its three-dimensional content generation method |
CN108510530A (en) * | 2017-02-28 | 2018-09-07 | 深圳市朗驰欣创科技股份有限公司 | A kind of three-dimensional point cloud matching process and its system |
CN108830150A (en) * | 2018-05-07 | 2018-11-16 | 山东师范大学 | One kind being based on 3 D human body Attitude estimation method and device |
CN108876881A (en) * | 2018-06-04 | 2018-11-23 | 浙江大学 | Figure self-adaptation three-dimensional virtual human model construction method and animation system based on Kinect |
Non-Patent Citations (2)
Title |
---|
S. Salti et al., "A performance evaluation of 3D keypoint detectors", in 2011 International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission *
Zhang Kailin, "3D object detection and localization based on point cloud registration", China Master's Theses Full-text Database *
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111222401A (en) * | 2019-11-14 | 2020-06-02 | 北京华捷艾米科技有限公司 | Method and device for identifying three-dimensional coordinates of hand key points |
CN111222401B (en) * | 2019-11-14 | 2023-08-22 | 北京华捷艾米科技有限公司 | Method and device for identifying three-dimensional coordinates of hand key points |
CN111079597A (en) * | 2019-12-05 | 2020-04-28 | 联想(北京)有限公司 | Three-dimensional information detection method and electronic equipment |
CN111028283A (en) * | 2019-12-11 | 2020-04-17 | 北京迈格威科技有限公司 | Image detection method, device, equipment and readable storage medium |
CN111028283B (en) * | 2019-12-11 | 2024-01-12 | 北京迈格威科技有限公司 | Image detection method, device, equipment and readable storage medium |
CN111199198A (en) * | 2019-12-27 | 2020-05-26 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
CN111199198B (en) * | 2019-12-27 | 2023-08-04 | 深圳市优必选科技股份有限公司 | Image target positioning method, image target positioning device and mobile robot |
CN111179328A (en) * | 2019-12-31 | 2020-05-19 | 智车优行科技(上海)有限公司 | Data synchronization calibration method and device, readable storage medium and electronic equipment |
CN111179328B (en) * | 2019-12-31 | 2023-09-08 | 智车优行科技(上海)有限公司 | Data synchronous calibration method and device, readable storage medium and electronic equipment |
CN111339880A (en) * | 2020-02-19 | 2020-06-26 | 北京市商汤科技开发有限公司 | Target detection method and device, electronic equipment and storage medium |
CN113312947A (en) * | 2020-02-27 | 2021-08-27 | 北京沃东天骏信息技术有限公司 | Method and device for determining behavior object |
CN111523468B (en) * | 2020-04-23 | 2023-08-08 | 北京百度网讯科技有限公司 | Human body key point identification method and device |
CN111523468A (en) * | 2020-04-23 | 2020-08-11 | 北京百度网讯科技有限公司 | Human body key point identification method and device |
CN111582240B (en) * | 2020-05-29 | 2023-08-08 | 上海依图网络科技有限公司 | Method, device, equipment and medium for identifying number of objects |
CN111582240A (en) * | 2020-05-29 | 2020-08-25 | 上海依图网络科技有限公司 | Object quantity identification method, device, equipment and medium |
CN111723688A (en) * | 2020-06-02 | 2020-09-29 | 北京的卢深视科技有限公司 | Human body action recognition result evaluation method and device and electronic equipment |
CN113763465A (en) * | 2020-06-02 | 2021-12-07 | 中移(成都)信息通信科技有限公司 | Garbage determination system, model training method, determination method and determination device |
CN111797745A (en) * | 2020-06-28 | 2020-10-20 | 北京百度网讯科技有限公司 | Training and predicting method, device, equipment and medium of object detection model |
WO2022011560A1 (en) * | 2020-07-14 | 2022-01-20 | Oppo广东移动通信有限公司 | Image cropping method and apparatus, electronic device, and storage medium |
CN112053427A (en) * | 2020-10-15 | 2020-12-08 | 珠海格力智能装备有限公司 | Point cloud feature extraction method, device, equipment and readable storage medium |
CN112489126B (en) * | 2020-12-10 | 2023-09-19 | 浙江商汤科技开发有限公司 | Vehicle key point information detection method, vehicle control method and device and vehicle |
CN112489126A (en) * | 2020-12-10 | 2021-03-12 | 浙江商汤科技开发有限公司 | Vehicle key point information detection method, vehicle control method and device and vehicle |
CN112668460A (en) * | 2020-12-25 | 2021-04-16 | 北京百度网讯科技有限公司 | Target detection method, electronic equipment, road side equipment and cloud control platform |
US11721042B2 (en) | 2020-12-25 | 2023-08-08 | Apollo Intelligent Connectivity (Beijing) Technology Co., Ltd. | Target detection method, electronic device and medium |
CN113344917B (en) * | 2021-07-28 | 2021-11-23 | 浙江华睿科技股份有限公司 | Detection method, detection device, electronic equipment and storage medium |
CN113344917A (en) * | 2021-07-28 | 2021-09-03 | 浙江华睿科技股份有限公司 | Detection method, detection device, electronic equipment and storage medium |
US20230033177A1 (en) * | 2021-07-30 | 2023-02-02 | Zoox, Inc. | Three-dimensional point clouds based on images and depth data |
Also Published As
Publication number | Publication date |
---|---|
CN110427917B (en) | 2022-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110427917A (en) | Method and apparatus for detecting key point | |
CN108154196B (en) | Method and apparatus for exporting image | |
US9129435B2 (en) | Method for creating 3-D models by stitching multiple partial 3-D models | |
CN103582893B (en) | The two dimensional image represented for augmented reality is obtained | |
US20180276241A1 (en) | System and method for telecom inventory management | |
CN110400363A (en) | Map constructing method and device based on laser point cloud | |
CN108694882A (en) | Method, apparatus and equipment for marking map | |
WO2012176945A1 (en) | Apparatus for synthesizing three-dimensional images to visualize surroundings of vehicle and method thereof | |
CN107516294A (en) | The method and apparatus of stitching image | |
CN109285188A (en) | Method and apparatus for generating the location information of target object | |
JP2017021328A (en) | Method and system of determining space characteristic of camera | |
CN110866977B (en) | Augmented reality processing method, device, system, storage medium and electronic equipment | |
CN109961501A (en) | Method and apparatus for establishing three-dimensional stereo model | |
CN110472460A (en) | Face image processing process and device | |
WO2020253716A1 (en) | Image generation method and device | |
KR20120076175A (en) | 3d street view system using identification information | |
US20180020203A1 (en) | Information processing apparatus, method for panoramic image display, and non-transitory computer-readable storage medium | |
CN111080704B (en) | Video augmented reality method and device | |
CN109978753A (en) | The method and apparatus for drawing panorama thermodynamic chart | |
CN110231832A (en) | Barrier-avoiding method and obstacle avoidance apparatus for unmanned plane | |
CN108597034A (en) | Method and apparatus for generating information | |
CN114898044A (en) | Method, apparatus, device and medium for imaging detection object | |
CN109034214B (en) | Method and apparatus for generating a mark | |
CN110287161A (en) | Image processing method and device | |
CN110110696A (en) | Method and apparatus for handling information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||