CN109035336B - Image-based position detection method, device, equipment and storage medium - Google Patents
- Publication number
- CN109035336B (publication), CN201810719611.XA / CN201810719611A (application)
- Authority
- CN
- China
- Prior art keywords
- target organism
- image
- determining
- shooting
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the present application provide an image-based position detection method, device, equipment and storage medium. Images captured at the same moment by a plurality of shooting devices erected in different orientations are acquired, the plurality of shooting devices being time-synchronized. Key points of a target organism, as well as the body and head of the target organism, are detected in the images captured by the plurality of shooting devices, and a first region position where the target organism is located in the image captured by each shooting device is determined based on the detection results. The actual three-dimensional position of the target organism is then determined based on the first region position of the target organism in the image captured by each shooting device and the internal and external parameters of each shooting device. The embodiments of the application can improve the positioning accuracy of both the two-dimensional position of an organism in an image and its three-dimensional position in the actual environment.
Description
Technical Field
The embodiments of the present application relate to the technical field of artificial intelligence, and in particular to an image-based position detection method, device, equipment and storage medium.
Background
With the advance of social intelligence, unmanned supermarkets have attracted wide attention as a new retail mode. At present, the technology behind unmanned supermarkets is not yet mature; in particular, determining the position of a customer through multiple cameras and tracking it continuously remains a difficult problem.
The existing solution mainly obtains a rectangular frame of the image region where a human body is located through human-body key point detection, and locates and tracks the human body by means of this rectangular frame. The accuracy of the rectangular frame depends on the accuracy of the key points; once a key point is missed or falsely detected, the rectangular frame, and hence the localization of the human body, becomes inaccurate.
Disclosure of Invention
The embodiments of the present application provide an image-based position detection method, device, equipment and storage medium, which improve the positioning accuracy of both the two-dimensional position of an organism in an image and its three-dimensional position in the actual environment.
A first aspect of the embodiments of the present application provides an image-based position detection method, including: acquiring images captured at the same moment by a plurality of shooting devices erected in different orientations, the plurality of shooting devices being time-synchronized; detecting key points of a target organism, as well as the body and head of the target organism, in the images captured by the plurality of shooting devices, and determining, based on the detection results, a first region position where the target organism is located in the image captured by each shooting device; and determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image captured by each shooting device and the internal and external parameters of each shooting device.
A second aspect of the embodiments of the present application provides an image-based position detection apparatus, including an acquisition module, a detection module and a first determining module. The acquisition module is used for acquiring images captured at the same moment by a plurality of shooting devices erected in different orientations, the plurality of shooting devices being time-synchronized; the detection module is used for detecting key points of a target organism, as well as the body and head of the target organism, in the images captured by the plurality of shooting devices, and determining, based on the detection results, a first region position where the target organism is located in the image captured by each shooting device; and the first determining module is used for determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image captured by each shooting device and the internal and external parameters of each shooting device.
A third aspect of embodiments of the present application provides a computer device, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method according to the first aspect as described above.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the method according to the first aspect.
Based on the above aspects, the embodiments of the present application acquire images captured at the same moment by a plurality of shooting devices erected in different orientations, detect key points of a target organism as well as its body and head in the images captured by the shooting devices, and determine, based on the detection results, a first region position where the target organism is located in each image. The actual three-dimensional position of the target organism is then determined from the first region position in each image together with the internal and external parameters of each shooting device. Because the first region position of the target organism in each image is determined from both the key point detection results and the body and head detection results, inaccurate localization of the first region caused by missed or false key point detection, as well as false detection caused by the small size of the head, can be avoided, improving the positioning accuracy of the two-dimensional position of the organism in the image and of its three-dimensional position in the actual environment.
It should be understood that the content described in this summary is not intended to identify key or critical features of the embodiments of the application, nor to limit the scope of the application. Other features of the present application will become apparent from the following description.
Drawings
FIG. 1 is a flowchart of an image-based position detection method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for determining a first region position according to an embodiment of the present application;
FIG. 3 is a flowchart of a method for executing step S12 according to an embodiment of the present application;
FIG. 4a is a schematic diagram of an image captured by a shooting device according to an embodiment of the present application;
FIG. 4b is a schematic diagram of the results of head and body detection provided by an embodiment of the present application;
FIG. 4c is a schematic diagram of the results of key point detection based on FIG. 4b;
FIG. 5 is a flowchart of another method for executing step S12 according to an embodiment of the present application;
FIG. 6a is a schematic diagram of the distribution area of a target organism on an image, obtained based on a preset key point detection model;
FIG. 6b is a schematic diagram of the head region and body region of a target organism detected based on a preset head and body detection model;
FIG. 6c is a schematic diagram of the region of the target organism determined based on FIGS. 6a and 6b;
FIG. 7 is a schematic structural diagram of an image-based position detection apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of a detection module 72 according to an embodiment of the present application;
FIG. 9 is a schematic structural diagram of another detection module 72 according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present application. It should be understood that the drawings and embodiments of the present application are for illustration purposes only and are not intended to limit the scope of the present application.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the embodiments of the application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are, for example, capable of operation in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In new retail scenes such as unmanned supermarkets, determining the position of a customer through multiple cameras and tracking it continuously is a technical difficulty. Throughout the shopping process, the customer needs to be associated with the goods he or she picks up, which requires continuously obtaining the customer's position and motion track. The current method for determining the position of a human body mainly obtains, based on human-body key point detection, a rectangular frame representing the area where the human body is located in the image, and locates and tracks the human body according to this rectangular frame. However, since the determination of the rectangular frame depends on the detection accuracy of the key points, a falsely detected or missed key point easily makes the rectangular frame, and hence the localization of the human body, inaccurate.
In view of the above problems in the prior art, embodiments of the present application provide an image-based position detection method that performs both key point detection and body and head detection of a target organism in images captured by a plurality of shooting devices, and combines the two detection results to determine the region position of the target organism in each image. The actual three-dimensional position of the target organism is then determined from the region position in each image together with the internal and external parameters of each shooting device. This avoids inaccurate localization of the organism in the image caused by missed or false key point detection, as well as false detection caused by the small size of the head, and improves the positioning accuracy of the organism's two-dimensional position in the image and of its three-dimensional position in the actual environment.
Technical solutions of embodiments of the present application will be described in detail below with reference to the accompanying drawings.
FIG. 1 is a flowchart of an image-based position detection method according to an embodiment of the present application. The method may be performed by an image-based position detection apparatus (hereinafter referred to as the position detection apparatus). Referring to FIG. 1, the method includes steps S11-S13:
and S11, acquiring images acquired by shooting at the same time by a plurality of shooting devices erected at different orientations, wherein the plurality of shooting devices are synchronized in time.
The multiple shooting devices in this embodiment may be aimed at the same calibration object or at different calibration objects, and the position, orientation and shooting angle of each shooting device may be set as required. In addition, the plurality of shooting devices may achieve time synchronization by reading network time, or by receiving a synchronization signal sent by a specific apparatus, which is not specifically limited in this embodiment.
And S12, detecting key points of the target organism, the body and the head of the target organism in the images shot by the plurality of shooting devices, and determining the position of a first area where the target organism is located in the images shot by the shooting devices based on the detection result.
The target organism in this embodiment may be a human body or another organism.
In the present embodiment, the designation "first region position" is used only to distinguish the region position of a living body in an image from other positions and carries no other meaning. When determining the first region position of the target organism in each image, the key points of the target organism may be detected in each image based on a preset first detection model, and the body and head of the target organism may be detected based on a preset second detection model. Preferably, both detection models are neural network models obtained by pre-training. The key points in this embodiment may be any points on the living body, such as, but not limited to, points on the hands, arms and legs.
In addition, in this embodiment, key point detection and body and head detection may be performed simultaneously or in a preset order, which is not specifically limited here. For example, in one possible mode, head and body detection may first be performed in each image to obtain the head position and body position of the target living body, the approximate region position of the whole target living body in each image may be determined from these positions, key point detection may then be performed within that approximate region, and the first region position of the target living body in each image may be determined from the two detection results. This not only avoids inaccurate localization caused by missed or false key point detection, but also reduces the computational load of key point detection and improves detection efficiency. In another possible mode, key point detection and body and head detection may be performed simultaneously in each image, and the first region position of the target organism in each image is comprehensively determined from the distribution region of the key points and the regions of the body and head of the target organism, eliminating the interference of falsely detected or missed key points. For example, FIG. 2 is a schematic diagram of a method for determining a first region position according to an embodiment of the present application. As shown in FIG. 2, in the image 20, the region 21 is the detected region of the head of the target organism, the region 22 is the detected region of its body, and the region 23 is the distribution region of the detected key points; accordingly, the first region position where the organism is located is determined as the region 24 based on the regions 23, 21 and 22. This is, of course, merely illustrative and not exhaustive.
And S13, determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image shot by each shooting device, and the internal parameters and the external parameters of each shooting device.
The internal parameters of the shooting device in this embodiment include, but are not limited to: focal length, field of view (FOV) and resolution. The external parameters of the shooting device in this embodiment include, but are not limited to: coordinate position, orientation and pitch angle.
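To make these parameters concrete, the sketch below assembles a pinhole projection matrix from them and projects a world point into pixels. It is a minimal illustration under an assumed pinhole camera model; the function names and parameterization are illustrative assumptions, since the embodiment does not prescribe a particular camera model.

```python
import numpy as np

def projection_matrix(fx, fy, cx, cy, R, t):
    """Build the 3x4 projection matrix P = K [R | t].

    fx, fy, cx, cy are internal parameters (focal lengths in pixels and
    the principal point, derivable from focal length, FOV and resolution);
    R (3x3 rotation) and t (3-vector translation) are external parameters
    encoding the device's coordinate position, orientation and pitch.
    """
    K = np.array([[fx, 0.0, cx],
                  [0.0, fy, cy],
                  [0.0, 0.0, 1.0]])
    return K @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])

def project(P, X):
    """Project a 3D world point X to (u, v) pixel coordinates."""
    x = P @ np.append(np.asarray(X, dtype=float), 1.0)
    return x[:2] / x[2]
```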
In this embodiment, several methods are available for determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image captured by each shooting device and the internal and external parameters of each shooting device:
in one possible approach, the actual three-dimensional position of the target biological body may be determined based on the vertex positions of the regions of the target biological body on the respective images, and the internal and external parameters of the respective cameras.
In another possible mode, the three-dimensional position of each key point may be determined based on the position of the key point of the target living body on each image, and the internal parameter and the external parameter of each photographing apparatus, and the three-dimensional position of the living body in the actual space may be determined based on the three-dimensional position of each key point.
Of course, it should be understood by those skilled in the art that the above two modes are merely examples given for clarity of illustration and do not exhaust the possibilities.
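As one concrete illustration of the second mode, the sketch below triangulates a single key point from its pixel positions in two or more synchronized views using the standard direct linear transform (DLT). This is only one common way to realize the multi-view step; the embodiment does not mandate a specific triangulation formula.

```python
import numpy as np

def triangulate(Ps, points_2d):
    """DLT triangulation of a single 3D point.

    Ps:        list of 3x4 projection matrices, one per shooting device
               (e.g. built from the internal and external parameters above).
    points_2d: list of matching (u, v) observations of the same key point.
    Returns the 3D point minimizing the algebraic reprojection residual.
    """
    rows = []
    for P, (u, v) in zip(Ps, points_2d):
        rows.append(u * P[2] - P[0])  # each view contributes two linear
        rows.append(v * P[2] - P[1])  # constraints on the homogeneous X
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]                        # null-space vector of the system
    return X[:3] / X[3]               # dehomogenize to (x, y, z)
```

Applied to each key point (or, under the first mode, to corresponding vertices of the first region positions), such a routine yields the actual three-dimensional position of the organism.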
Further, since in scenes such as unmanned supermarkets the movement track and behavior of a living body are often of interest in addition to its position, after the three-dimensional position of the target organism at the current moment is obtained, a movement track of the target organism may be generated from its three-dimensional positions at moments before the current one, so that its behavior can be better analyzed. Alternatively, the three-dimensional position of each key point may be determined based on the positions of the key points of the target organism in each image and the internal and external parameters of each shooting device, and the posture of the organism may be determined from these three-dimensional key point positions, likewise serving the analysis of the organism's behavior.
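A minimal sketch of the track-generation step just described, assuming that the association of detections to a stable person identity across moments is handled elsewhere; the `TrackStore` name is illustrative, not taken from the embodiment.

```python
from collections import defaultdict

class TrackStore:
    """Accumulate per-moment 3D positions into movement tracks."""

    def __init__(self):
        # person_id -> list of (timestamp, x, y, z) samples
        self.tracks = defaultdict(list)

    def add(self, person_id, timestamp, xyz):
        x, y, z = xyz
        self.tracks[person_id].append((timestamp, x, y, z))

    def trajectory(self, person_id):
        # Samples ordered by time form the movement track up to now.
        return sorted(self.tracks[person_id])
```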
In this embodiment, images captured at the same moment by a plurality of shooting devices erected in different orientations are acquired; key points of the target organism, as well as its body and head, are detected in the images captured by the shooting devices; and a first region position where the target organism is located in each image is determined based on the detection results, so that the actual three-dimensional position of the target organism can be determined from the first region positions together with the internal and external parameters of the shooting devices. Since the first region position of the target organism in each image is determined from both the key point detection results and the body and head detection results, inaccurate localization of the first region caused by missed or false key point detection, as well as false detection caused by the small size of the head, can be avoided, and the positioning accuracy of the two-dimensional position of the organism in the image and of its three-dimensional position in the actual environment is improved.
The above embodiments are further optimized and expanded below with reference to the accompanying drawings.
FIG. 3 is a flowchart of a method for executing step S12 according to an embodiment of the present application. As shown in FIG. 3, on the basis of the embodiment of FIG. 1, the method includes steps S21-S23:
S21, detecting a body and a head of the target living body in the images captured by the plurality of imaging devices, and determining a second area position in which the entire target living body is located in each image based on the area position in which the body of the target living body is located and the area position in which the head of the target living body is located.
And S22, detecting the key point in the second area position of each image.
And S23, determining the distribution position of the key point of the target organism in the second area position of each image as the first area position of the target organism on each image.
By way of example, FIG. 4a is a schematic diagram of an image captured by a shooting device provided in an embodiment of the present application, where the image 40 includes a target organism 41. First, the image 40 is processed by a preset neural network model to obtain the area 42 where the head of the target organism 41 is located and the area 43 where its body is located, and the area position (i.e., the second area position) 44 where the whole target organism 41 is located is obtained from the areas 42 and 43; the detection result is shown in FIG. 4b. Key points are then detected within the area position 44 to obtain the key points shown in FIG. 4c, and the position over which these key points are distributed within the area position 44 is determined as the first area position 45 where the target organism 41 is located.
This example is, of course, merely illustrative and does not limit the invention.
In this embodiment, head and body detection is first performed in the image captured by a shooting device, and the approximate position of the whole target organism in the image is determined from the head position and body position. Key point detection is then performed only within this approximate position, and the distribution position of the key points within it is taken as the first region position of the target organism in the image. This not only eliminates the adverse effect of falsely detected or missed key points on the localization of the organism, but also reduces the computational load of key point detection and improves detection efficiency.
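The coarse-to-fine flow of FIG. 3 might be sketched as follows, with `body_head_model` and `keypoint_model` standing in for the preset neural-network detectors mentioned in the text; their interfaces are assumptions, and the key points are assumed to come back as an (N, 2) NumPy array in crop coordinates.

```python
def detect_first_region(image, body_head_model, keypoint_model):
    """FIG. 3 pipeline: head/body first, then key points inside the crop."""
    head_box, body_box = body_head_model(image)      # each (x1, y1, x2, y2)
    # Second region position: smallest box covering head and body.
    x1 = min(head_box[0], body_box[0]); y1 = min(head_box[1], body_box[1])
    x2 = max(head_box[2], body_box[2]); y2 = max(head_box[3], body_box[3])
    crop = image[int(y1):int(y2), int(x1):int(x2)]
    # Key points are searched only inside the crop, cutting computation.
    kps = keypoint_model(crop).astype(float)         # (N, 2), crop coords
    kps[:, 0] += x1                                  # back to image coords
    kps[:, 1] += y1
    # First region position: bounding box of the key point distribution.
    return (kps[:, 0].min(), kps[:, 1].min(),
            kps[:, 0].max(), kps[:, 1].max())
```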
FIG. 5 is a flowchart of another method for executing step S12 according to an embodiment of the present application. As shown in FIG. 5, on the basis of the embodiment of FIG. 1, the method includes steps S31-S33:
S31, detecting key points of the target living body in the images captured by the plurality of imaging devices, and determining a distribution area of the key points of the target living body in each image.
S32, detecting the body and the head of the target living body in the images captured by the plurality of imaging devices, and determining the position of the region in which the head and the body of the target living body are located in each image.
And S33, correcting the distribution area of the key points of the target organism on the image according to the area positions of the head and the body of the target organism in the image, so as to obtain the first area position.
For example, assume that FIG. 6a is a schematic diagram of the distribution area of a target organism on an image, obtained based on a preset key point detection model, where the area 61 is the distribution area of the key points. FIG. 6b is a schematic diagram of the head region and body region of the target organism detected based on a preset head and body detection model, where the region 62 is the region of the head of the target organism in the image and the region 63 is the region of its body in the image. Then, based on FIGS. 6a and 6b, the interference of falsely detected and missed key points is eliminated, and the region 64 of the target organism in the image, as shown in FIG. 6c, can be obtained.
It is understood that this is by way of illustration and not by way of limitation.
This embodiment can eliminate the adverse effect of falsely detected or missed key points on the localization of the organism, reduce the computational load of key point detection, and improve the detection efficiency.
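A sketch of the correction step of FIG. 5, under the assumption that key points and boxes are given in pixel coordinates; the rule of dropping key points outside the union of the head and body boxes is one plausible reading of "correcting", not the only one.

```python
import numpy as np

def correct_distribution(keypoints, head_box, body_box):
    """FIG. 5 path: prune key points that fall outside the head/body area.

    keypoints: (N, 2) array of detected (x, y) positions in the image.
    head_box, body_box: (x1, y1, x2, y2) detector outputs.
    Returns the corrected first region position as (x1, y1, x2, y2).
    """
    union = (min(head_box[0], body_box[0]), min(head_box[1], body_box[1]),
             max(head_box[2], body_box[2]), max(head_box[3], body_box[3]))
    inside = ((keypoints[:, 0] >= union[0]) & (keypoints[:, 0] <= union[2]) &
              (keypoints[:, 1] >= union[1]) & (keypoints[:, 1] <= union[3]))
    kept = keypoints[inside]
    if len(kept) == 0:          # everything rejected: fall back to the boxes
        return union
    x1, y1 = kept.min(axis=0)
    x2, y2 = kept.max(axis=0)
    return (float(x1), float(y1), float(x2), float(y2))
```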
FIG. 7 is a schematic structural diagram of an image-based position detection apparatus according to an embodiment of the present application. As shown in FIG. 7, the apparatus 70 includes:
an obtaining module 71, configured to obtain images captured at the same moment by multiple shooting devices erected in different orientations, where the multiple shooting devices are time-synchronized;
a detection module 72, configured to detect a key point of a target organism, a body and a head of the target organism in images captured by the plurality of capturing devices, and determine a first region position where the target organism is located in the images captured by the capturing devices based on a detection result;
a first determining module 73, configured to determine an actual three-dimensional position of the target biological object based on a first region position of the target biological object in the image captured by each capturing device, and internal parameters and external parameters of each capturing device.
In one possible design, the first determining module 73 includes:
and the second determining submodule is used for determining the actual three-dimensional position of the target organism based on the position of the key point of the target organism on the image shot by each shooting device and the internal parameter and the external parameter of each shooting device.
In yet another possible design, the apparatus further includes:
and the generating module is used for generating a movement track of the target organism based on its three-dimensional position at the current moment and its three-dimensional positions at each moment before the current moment.
In yet another possible design, the apparatus further includes:
a second determining module, configured to determine, after detecting and obtaining the key points of the target organism from the images captured by the plurality of capturing devices, actual three-dimensional positions of the key points on the target organism based on positions of the key points of the target organism on the images captured by the capturing devices, and internal parameters and external parameters of the capturing devices;
and the third determining module is used for determining the posture of the target organism based on the actual three-dimensional position of each key point on the target organism.
The apparatus provided in this embodiment can be used to execute the method in the embodiment of FIG. 1, and the execution manner and beneficial effects are similar, which are not described herein again.
FIG. 8 is a schematic structural diagram of a detection module 72 according to an embodiment of the present application. As shown in FIG. 8, on the basis of the embodiment of FIG. 7, the detection module 72 includes:
a first detection submodule 721, configured to detect the body and the head of the target organism in the images captured by the plurality of shooting devices, and to determine a second area position where the whole target organism is located in each image based on the area position where the body of the target organism is located and the area position where the head of the target organism is located;
a second detection submodule 722, configured to perform keypoint detection in the second region position of each image;
the first determining sub-module 723 is configured to determine a distribution position of the keypoints of the target living object in the second region position of each image as a first region position of the target living object on each image.
The apparatus provided in this embodiment can be used to execute the method in the embodiment of FIG. 3, and the execution manner and beneficial effects are similar, which are not described herein again.
FIG. 9 is a schematic structural diagram of another detection module 72 according to an embodiment of the present application. As shown in FIG. 9, on the basis of the embodiment of FIG. 7, the detection module 72 includes:
a third detection submodule 724 configured to detect a key point of a target biological body in images captured by the plurality of capturing apparatuses, and determine a distribution area of the key point of the target biological body in each image;
a fourth detection submodule 725, configured to detect the body and the head of the target organism in the images captured by the plurality of shooting devices and to determine the region positions where the head and the body of the target organism are located in each image; and
a position correction submodule 726, configured to correct, for each image, the distribution region of the key points of the target organism on the image based on the region positions where the head and body of the target organism are located in that image, so as to obtain the first region position.
The apparatus provided in this embodiment can be used to execute the method in the embodiment of FIG. 5, and the execution manner and beneficial effects are similar, which are not described herein again.
An embodiment of the present application further provides a computer device, including: one or more processors;
a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments.
The present application also provides a computer readable storage medium, on which a computer program is stored, and the computer program is executed by a processor to implement the method of any one of the above embodiments.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on a chip (SOC), a complex programmable logic device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate implementations can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Claims (14)
1. An image-based position detection method, comprising:
acquiring images shot by a plurality of shooting devices erected in different directions at the same moment, wherein the plurality of shooting devices are synchronized in time;
detecting key points of a target organism, a body and a head of the target organism in images shot by the plurality of shooting devices, and determining a first region position where the target organism is located in the images shot by the shooting devices based on the detection result;
determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image shot by each shooting device and the internal parameters and the external parameters of each shooting device;
wherein the key point is any point on the target organism.
2. The method according to claim 1, wherein the detecting key points of a target organism and a body and a head of the target organism in the images captured by the plurality of capturing devices, and determining a first region position in which the target organism is located in the image captured by each capturing device based on the detection result comprises:
detecting a body and a head of a target organism in images shot by the plurality of shooting devices, and determining a second area position of the whole target organism in each image based on an area position of the body of the target organism and an area position of the head of the target organism;
performing keypoint detection in the second region position of each image;
and determining the distribution position of the key point of the target organism in the second region position of each image as the first region position of the target organism on each image.
3. The method according to claim 1, wherein the detecting key points of a target organism and a body and a head of the target organism in the images captured by the plurality of capturing devices, and determining a first region position in which the target organism is located in the image captured by each capturing device based on the detection result comprises:
detecting key points of a target organism in images shot by the plurality of shooting devices, and determining a distribution area of the key points of the target organism in each image;
detecting a body and a head of a target organism in images shot by the plurality of shooting devices, and determining the position of the area where the head and the body of the target organism are located in each image;
and correcting a distribution area of key points on the target organism on the image according to the area positions of the head and the body of the target organism in the image to obtain a first area position.
4. The method of claim 1, wherein determining the actual three-dimensional position of the target biological object based on the first region position of the target biological object in the image captured by each capturing device and the internal and external parameters of each capturing device comprises:
and determining the actual three-dimensional position of the target organism based on the positions of the key points of the target organism on the images shot by the shooting devices and the internal parameters and the external parameters of the shooting devices.
5. The method according to any one of claims 1-4, wherein after determining the actual three-dimensional position of the target biological object based on the first region position of the target biological object in the image captured by each capturing device, and the internal and external parameters of each capturing device, the method further comprises:
and generating a movement track of the target organism based on the three-dimensional position of the target organism at the time and the three-dimensional position of the target organism at each time before the time.
6. The method according to any one of claims 1-4, further comprising:
after key points of a target organism are detected and obtained from images shot by the plurality of shooting devices, determining the actual three-dimensional position of each key point on the target organism based on the position of the key point of the target organism on the images shot by each shooting device and internal parameters and external parameters of each shooting device;
and determining the posture of the target organism based on the actual three-dimensional positions of the key points on the target organism.
7. An image-based position detection apparatus, comprising:
the device comprises an acquisition module, a detection module and a first determining module, wherein the acquisition module is used for acquiring images captured by a plurality of shooting devices erected in different directions at the same moment, and the time of the plurality of shooting devices is synchronous;
the detection module is used for detecting key points of a target organism, a body and a head of the target organism in the images shot by the plurality of shooting devices and determining a first region position where the target organism is located in the images shot by the shooting devices based on the detection result;
the first determining module is used for determining the actual three-dimensional position of the target organism based on the first region position of the target organism in the image shot by each shooting device and the internal parameters and the external parameters of each shooting device;
wherein the key point is any point on the target organism.
8. The apparatus of claim 7, wherein the detection module comprises:
a first detection sub-module that detects a body and a head of a target organism in images captured by the plurality of imaging devices, and determines a second area position in which the whole target organism is located in each image based on an area position in which the body of the target organism is located and an area position in which the head of the target organism is located;
the second detection submodule is used for detecting key points in the second area position of each image;
and the first determining submodule is used for determining the distribution position of the key point of the target organism in the second area position of each image as the first area position of the target organism on each image.
9. The apparatus of claim 7, wherein the detection module comprises:
a third detection submodule for detecting a key point of a target organism in images captured by the plurality of capturing apparatuses, and determining a distribution area of the key point of the target organism in each image;
a fourth detection sub-module for detecting a body and a head of a target organism in images captured by the plurality of capturing devices, determining a position of a region in which the head and the body of the target organism are located in each image,
and the position correction submodule is used for correcting the distribution area of the key points on the target organism on the image according to the area positions of the head and the body of the target organism in the image so as to obtain a first area position.
10. The apparatus of claim 7, wherein the first determining module comprises:
and the second determining submodule is used for determining the actual three-dimensional position of the target organism based on the position of the key point of the target organism on the image shot by each shooting device and the internal parameter and the external parameter of each shooting device.
11. The apparatus according to any one of claims 7-10, further comprising:
and the generating module is used for generating a movement track of the target organism based on the three-dimensional position of the target organism at the moment and the three-dimensional position of the target organism at each moment before the moment.
12. The apparatus according to any one of claims 7-10, further comprising:
a second determining module, configured to determine, after detecting and obtaining the key points of the target organism from the images captured by the plurality of capturing devices, actual three-dimensional positions of the key points on the target organism based on positions of the key points of the target organism on the images captured by the capturing devices, and internal parameters and external parameters of the capturing devices;
and the third determining module is used for determining the posture of the target organism based on the actual three-dimensional position of each key point on the target organism.
13. A computer device, comprising:
one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method of any one of claims 1-6.
14. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810719611.XA CN109035336B (en) | 2018-07-03 | 2018-07-03 | Image-based position detection method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810719611.XA CN109035336B (en) | 2018-07-03 | 2018-07-03 | Image-based position detection method, device, equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109035336A CN109035336A (en) | 2018-12-18 |
CN109035336B true CN109035336B (en) | 2020-10-09 |
Family
ID=65521507
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810719611.XA Active CN109035336B (en) | 2018-07-03 | 2018-07-03 | Image-based position detection method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109035336B (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040014955A1 (en) * | 2001-12-17 | 2004-01-22 | Carlos Zamudio | Identification of essential genes of cryptococcus neoformans and methods of use |
CN107292269B (en) * | 2017-06-23 | 2020-02-28 | Institute of Automation, Chinese Academy of Sciences | Face image false distinguishing method based on perspective distortion characteristic, storage and processing equipment |
CN108053469A (en) * | 2017-12-26 | 2018-05-18 | Tsinghua University | Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera |
- 2018-07-03: CN application CN201810719611.XA granted as patent CN109035336B (status: Active)
Non-Patent Citations (2)
Title |
---|
Dual Deep Network for Visual Tracking; Zhizhen Chi; IEEE Transactions on Image Processing (TIP); 2017-04-30; full text *
Target Tracking Based on Deep Convolutional Neural Networks; Chi Zhizhen; China Masters' Theses Full-text Database (Information Science and Technology); 2018-06-15; full text *
Also Published As
Publication number | Publication date |
---|---|
CN109035336A (en) | 2018-12-18 |
Similar Documents
Publication | Title |
---|---|
CN108986164B (en) | Image-based position detection method, device, equipment and storage medium |
US10059002B2 | Image processing apparatus, image processing method, and non-transitory computer-readable medium |
US9589368B2 | Object-tracking systems and methods |
US8265425B2 | Rectangular table detection using hybrid RGB and depth camera sensors |
US10659670B2 | Monitoring system and control method thereof |
US9053546B2 | Information processing apparatus, control method therefor, and computer-readable storage medium |
CN110926330B | Image processing apparatus, image processing method, and program |
Führ et al. | Camera self-calibration based on nonlinear optimization and applications in surveillance systems |
US20170345184A1 | Three-dimensional information restoration device, three-dimensional information restoration system, and three-dimensional information restoration method |
US20180005069A1 | Information Processing Apparatus and Information Processing Method |
US10623629B2 | Imaging apparatus and imaging condition setting method and program |
CN117896626B | Method, device, equipment and storage medium for detecting motion trail by multiple cameras |
CN110991292A | Action identification comparison method and system, computer storage medium and electronic device |
US11989928B2 | Image processing system |
CN109035336B (en) | Image-based position detection method, device, equipment and storage medium |
JPWO2019016879A1 | Object detection device and object detection method |
JP2011095131A | Image processing method |
Szalóki et al. | Marker localization with a multi-camera system |
EP4439459A1 | Matching data between images for use in extrinsic calibration of multi-camera system |
US20220398772A1 | Object and feature detection in images |
KR20220129728A | Algorihm for keyframe extraction from video |
CN116883964A | Target object detection method, device, equipment and storage medium |
CN114758016A | Camera equipment calibration method, electronic equipment and storage medium |
Pipitone et al. | Tripod operators for efficient search of point cloud data for known surface shapes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |