WO2022246605A1 - Key point calibration method and apparatus
- Publication number: WO2022246605A1
- Application: PCT/CN2021/095539 (CN2021095539W)
- Authority: WIPO (PCT)
- Prior art keywords: key point, image, coordinate system
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Description
- The present application relates to the field of autonomous driving, and in particular to a key point calibration method and device.
- Identifying key points in images is fundamental to computing devices performing vision tasks. For example, in the process of face recognition or gesture recognition, it is necessary to first determine the position of the key points of the face or fingers, and then recognize the current face or gesture through a series of algorithms on this basis.
- The model used to recognize the current face or gesture needs to be trained using key point data. The larger the amount of key point data, the stronger the recognition ability of the trained model.
- The existing key point data is obtained through manual calibration of pictures, which has the following disadvantages: the calibration speed is slow, and each person can only calibrate about 100-200 pictures per day; different calibration personnel understand the calibration rules inconsistently, so two different calibration personnel may calibrate the key points of the same picture differently, and sometimes even the same calibration personnel will mark the key points of the same picture at different positions on two successive attempts; when the rotation angle of the face relative to the camera is too large and a large part of the face is blocked, the calibration personnel can only guess where the key points of the blocked part are, so the accuracy of the calibration can no longer be guaranteed; and manual calibration can only calibrate the two-dimensional coordinates of the key points in the picture, and cannot calibrate the depth of the key points.
- The present application provides a key point calibration method and device to realize automatic key point calibration, reduce the consumption of human resources, and ensure the accuracy of key point calibration, so that the calibration results can reach a level suitable for commercial deployment.
- The calibration method provided by the present application can be executed by a local terminal, such as a computer, or by a processor; it can also be executed by a server. The processor can be a central processing unit (CPU), a graphics processing unit (GPU), or a general-purpose processor. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor. The server can be a cloud server, a local server, a physical server, or a virtual server, which is not limited in this application.
- In one scenario, the image acquisition device (such as a mobile phone or a terminal with a camera) sends the image to the local terminal; after receiving the image, the local terminal calibrates the key points of the image and stores the information of the key points obtained after calibration in the local memory.
- In another scenario, the image acquisition device (such as a mobile phone or a terminal with a camera) sends the image to the cloud server; after the server receives the image, it calibrates the key points of the image and stores the key point information obtained after calibration in cloud storage, or transmits the information of the key points obtained after calibration (the coordinates of the key points in the image, the depth of the key points, etc.) back to the local terminal (such as a computer, mobile phone, or camera) or back to the local memory.
- The first aspect of the present application provides a key point calibration method, including: acquiring multiple captured images and the parameters of the capture devices corresponding to the multiple captured images, where the target objects in the multiple captured images have the same posture and are captured at different angles; the multiple captured images include first images and other images, wherein the captured angle of the target object in the first images is smaller than a preset threshold, and the first images include at least two images; determining the position of a key point in the world coordinate system according to the position of the key point of the target object in the first images and the parameters of the acquisition devices corresponding to the first images; and determining the position of the key point in the other images according to the parameters of the acquisition devices corresponding to the other images and the position of the key point in the world coordinate system.
- Since the position of the key point in the world coordinate system is accurate, the position of the key point in the other images can be accurately located. This solves the problem that key points cannot be accurately calibrated when the acquisition angle of the target object is large, and improves the accuracy of key point calibration in images at various captured angles.
- Automatic calibration of key points is realized without manual calibration, which improves the efficiency of key point calibration and reduces the consumption of human resources.
- The multiple acquired images are images with standardized sizes.
- In this way, the sizes of the target objects in the collected images are unified, thereby improving the accuracy of the positions of the key points in the first image.
- Determining the position of the key point in the world coordinate system includes: obtaining the position of the key point in the world coordinate system by triangulation according to the positions of the key point in the at least two first images and the parameters of the image acquisition devices corresponding to the first images.
- The method also includes: calibrating the position of the key point in the world coordinate system, so that the position of the key point in the world coordinate system is located in the key area of the target object; and updating the positions of the key point in the multiple captured images according to the calibrated position of the key point in the world coordinate system and the parameters of the acquisition devices corresponding to the multiple captured images.
- In this way, even if the position determined by the key point calibration model is inaccurate, an accurate position of the key point in the world coordinate system can be obtained, and the determined positions of the key point in the collected images can then be updated to ensure the accuracy of key point recognition.
- The parameters of the image acquisition device include the internal parameters of the cameras in the camera array, and determining the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the parameters of the acquisition device corresponding to the first image includes: determining the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the internal parameters of the cameras in the camera array.
- The target object includes a human face. The target objects of this application are not limited to human faces, but can also be human hands, human bodies, etc.
- The position of the key point in the first image is obtained through a key point calibration model, and the key point calibration model is obtained by training in the following manner: acquiring the other images and the determined positions of the key point in the other images; using the determined positions of the key point in the other images as the first training target, and training the key point calibration model according to the other images until the difference between the positions of the key point in the other images obtained by the key point calibration model and the first training target converges.
- In this way, the accuracy with which the key point calibration model predicts the position of the key point in the collected images can be improved, and the prediction ability of the model improves as the input sample data increases.
- The training method also includes: taking the depth of the key point in the multiple captured images as the second training target, and training the key point calibration model according to the multiple captured images until the difference between the depth obtained by the key point calibration model and the second training target converges, wherein the depth of the second training target is obtained according to the positions of the key points in the world coordinate system and the captured angles of the target objects in the multiple captured images.
- In this way, the depth of the key points can be obtained, and the key point calibration model gains the function of predicting the depth of key points.
- The second aspect of the present application provides a key point calibration device, including a transceiver module and a processing module. The transceiver module is used to acquire multiple captured images and the parameters of the capture devices corresponding to the multiple captured images; the target objects in the multiple captured images have the same posture and are collected from different angles; the multiple captured images include first images and other images, wherein the captured angle of the target object in the first images is smaller than a preset threshold, and the first images include at least two images.
- The processing module is used to determine the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the parameters of the acquisition device corresponding to the first image; the processing module is also used to determine the position of the key point in the other images according to the parameters of the acquisition devices corresponding to the other images and the position of the key point in the world coordinate system.
- The multiple acquired images are images with standardized sizes.
- The processing module is specifically configured to calculate the position of the key point in the world coordinate system by triangulation according to the positions of the key point in the at least two first images and the parameters of the image acquisition devices corresponding to the first images.
- The processing module is also used to: calibrate the position of the key point in the world coordinate system so that it is located in the key area of the target object; and update the positions of the key points in the multiple captured images according to the calibrated positions of the key points in the world coordinate system and the parameters of the acquisition devices corresponding to the multiple captured images.
- The parameters of the image acquisition device include the internal parameters of the cameras in the camera array, and the processing module is specifically configured to determine the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the internal parameters of the cameras in the camera array.
- The target object includes a human face.
- The position of the key point in the first image is obtained through a key point calibration model. The transceiver module is also used to obtain the other images and the determined positions of the key point in the other images; the processing module is also used to take the determined positions of the key point in the other images as the first training target and train the key point calibration model according to the other images until the difference between the positions of the key point in the other images obtained by the key point calibration model and the first training target converges.
- The processing module is also used to: take the depth of the key point in the multiple captured images as the second training target and train the key point calibration model according to the multiple captured images until the difference between the depth obtained by the key point calibration model and the second training target converges, wherein the depth of the second training target is obtained according to the positions of the key points in the world coordinate system and the captured angles of the target objects in the multiple captured images.
- The third aspect of the present application provides a computing device, including: a processor coupled to a memory, the memory being used to store programs or instructions; when the programs or instructions are executed by the processor, the computing device executes the method provided in the first aspect of the present application.
- The fourth aspect of the present application provides a computer-readable storage medium in which program code is stored; when the program code is executed by a terminal or a processor in the terminal, the method provided in the first aspect of the present application and its possible implementations is executed.
- The fifth aspect of the present application provides a computer program product; when the program code contained in the computer program product is executed by the processor in the terminal, the method provided in the first aspect of the present application and its possible implementations is implemented.
- The sixth aspect of the present application provides a vehicle, including: the key point calibration device provided in the second aspect of the present application and any possible implementation thereof, the computing device provided in the third aspect of the present application, or the computer-readable storage medium provided in the fourth aspect of the present application.
- The seventh aspect of the present application provides a key point calibration system, including: an image acquisition device and a computing device, wherein the image acquisition device is used to collect multiple captured images and send them to the computing device, and the computing device is used to execute the key point calibration method provided by the above first aspect and any possible implementation thereof.
- The computing device is further configured to send the calibrated key point information back to the image acquisition device.
- Fig. 1 is a schematic diagram of an application scenario of the key point calibration method provided in an embodiment of the present application;
- Fig. 2 is a flow chart of the key point calibration method provided by an embodiment of the present application;
- Fig. 3 is a block diagram of the key point calibration device provided by an embodiment of the present application;
- Fig. 4a is a schematic diagram of using triangulation to locate the coordinates of a key point in the world coordinate system provided by an embodiment of the present application;
- Fig. 4b is a schematic diagram of using triangulation to locate the coordinates of a key point in the world coordinate system provided by an embodiment of the present application, wherein the spatial position of the key point is not at the intersection of the straight line $O_1 p_1$ and the straight line $O_2 p_2$;
- Fig. 5a is a flowchart of the face key point calibration method provided by an embodiment of the present application;
- Fig. 5b is a schematic diagram of the face key point calibration rule provided by an embodiment of the present application;
- Fig. 6 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
- One possible implementation is: obtain the initial face image, and after preprocessing the initial face image, obtain the face image to be detected; then use a first-level convolutional neural network to predict the key points of the face image to be detected and obtain predicted face key points; then perform second-level convolutional neural network processing and regression processing on the predicted key points to obtain the key points of the target object, thereby improving the calibration accuracy of the face key points.
- This face key point calibration method utilizes the timing information between images, which means that the input images need to be several consecutive frames so that there is a gradual trend between images; if the images have no temporal correlation, key points in an image cannot be precisely identified.
- In view of this, the embodiment of the present application provides a key point calibration method and device.
- Fig. 1 shows an exemplary application scenario of the key point calibration method provided by the embodiment of the present application.
- An image acquisition device such as the camera array 30 collects images of the person 40 at different angles in the current posture. The collected images are transmitted to the computer 20; after the computer 20 receives the images, it calibrates the key points of the images and stores the information of the key points obtained after calibration in the memory.
- The images can also be uploaded to the server 10; the server 10 calibrates the key points of the images after receiving them, and the information of the key points obtained after calibration can be stored in cloud storage, or transmitted back to the local terminal (such as a computer, mobile phone, or camera) or back to the local memory.
- The server may be a cloud server, a local server, a physical server, or a virtual server, which is not limited in this application.
- Fig. 2 shows a flow chart of the key point calibration method provided by the embodiment of the present application.
- The key point calibration method provided in the embodiment of the present application may be executed by a terminal, such as a computer, or by a processor; it may also be executed by a server. The processor may be a CPU, a GPU, or a general-purpose processor; a general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
- The software code of the key point calibration method in Fig. 2 can be stored in a memory and run by a terminal or a server, thereby realizing the calibration of the key points of a human face.
- The key point calibration method includes the following steps:
- Step S1: Obtain multiple captured images and the parameters of the capture devices corresponding to the multiple captured images. The poses of the target objects in the multiple captured images are the same and the captured angles are different; the multiple captured images include first images and other images, wherein the captured angle of the target object in the first images is smaller than a preset threshold, and the first images include at least two images.
- Target objects may include: human faces, human hands, and human bodies.
- The image acquisition device may include: a camera, a camera array, a mobile phone with a camera, and a computer with a camera.
- When there is a single image acquisition device, the target object keeps one posture while the image acquisition device collects images of the target object at different angles. For example, a track can be set around the target object so that the image acquisition device moves along the track while collecting images of the target object, recording the acquisition angle at the time of collection. When there are multiple image acquisition devices, for example when the image acquisition device is a camera array, the camera array collects images of the target object at the same time.
- The types of cameras in the camera array can be the same or different. For example, the camera array can use all infrared (IR) cameras, all red-green-blue (RGB) cameras, or other cameras; IR cameras and RGB cameras can also be mixed to achieve diversity of image data, so that the key point calibration model can support diverse image data.
- In this embodiment, the parameters of the image acquisition device include the internal parameters of the cameras in the camera array.
- The internal reference of the camera, also known as the camera projection matrix, is a parameter provided with each calibrated camera. Using the camera projection matrix, the three-dimensional coordinates of the collected target object in the world coordinate system can be converted into the two-dimensional coordinates in the collected image:

$$Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

- In the matrix $M$, the remaining entries take the value 0 and the entry in the third row, third column takes the value 1; $f_x$ is the ratio of the focal length of the camera to the width of an image pixel in the x-axis direction, with the x-axis parallel to the u-axis; $f_y$ is the ratio of the focal length of the camera to the width of an image pixel in the y-axis direction, with the y-axis parallel to the v-axis; $(u_0, v_0)$ is the coordinate in the image of the intersection point of the optical axis of the camera with the image; and $Z_c$ is the scaling factor of the camera.
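- For illustration, a minimal Python sketch (assuming NumPy; the function names and example values are not from the patent) of building such a projection matrix and converting world coordinates into image coordinates:

```python
import numpy as np

def projection_matrix(fx: float, fy: float, u0: float, v0: float) -> np.ndarray:
    """Build the 3x4 camera projection matrix M described above."""
    return np.array([
        [fx,  0.0, u0,  0.0],
        [0.0, fy,  v0,  0.0],
        [0.0, 0.0, 1.0, 0.0],
    ])

def world_to_image(M: np.ndarray, P_world: np.ndarray) -> np.ndarray:
    """Project homogeneous world coordinates (X, Y, Z, 1) to pixel coordinates (u, v)."""
    p = M @ P_world       # (Z_c * u, Z_c * v, Z_c)
    return p[:2] / p[2]   # divide out the scaling factor Z_c

# Example with assumed values: focal ratios of 800 px, principal point (320, 240).
M = projection_matrix(800.0, 800.0, 320.0, 240.0)
print(world_to_image(M, np.array([0.1, 0.2, 2.0, 1.0])))  # -> [360. 320.]
```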
- When the key point calibration is performed by a local server, it is only necessary to transmit the images collected by the cameras to the local server through a data cable or signal transmission, and the local server can perform steps S1-S3 according to the images and the parameters of the cameras.
- When the key point calibration is performed by a cloud server, it is also necessary to transmit the projection matrix of the camera corresponding to each image to the server, and the server performs steps S1-S3 according to the images and the parameters of the cameras.
- The plurality of acquired images are size-normalized images. For example, the area where the target object is located can be intercepted from the original image collected by the image acquisition device, and the intercepted areas are then unified into images of the same preset size, which is convenient for the subsequent identification of the positions of the key points of the target object in the first image. Size normalization can be implemented by neural networks, for example by image segmentation models such as Regions with CNN features (RCNN) and Region Proposal Network (RPN).
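- As an illustration only, a size-normalization sketch in Python using OpenCV, where the face box is assumed to come from a detection or segmentation model such as the RCNN/RPN mentioned above, and 256×256 is an assumed preset size:

```python
import cv2
import numpy as np

PRESET_SIZE = (256, 256)  # assumed preset size (width, height)

def normalize_region(original: np.ndarray, box: tuple) -> np.ndarray:
    """Intercept the area where the target object is located, given an
    (x, y, w, h) box from a detection/segmentation model, and resize it to
    the preset size so every captured image has the same dimensions."""
    x, y, w, h = box
    crop = original[y:y + h, x:x + w]
    return cv2.resize(crop, PRESET_SIZE)
```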
- The captured angle of the target object can be obtained through a deflection angle identification model. Since the multiple acquired images show the target object in the current posture from different angles, some images are bound to display all the features of the target object completely. For an image that completely and accurately displays the features of the target object, the positioning of its key points is more accurate than in other images, so it is more accurate to calculate the position of a key point in the world coordinate system using the image positions of the key point in at least two first images whose captured angles are smaller than the preset value.
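- A hedged sketch of this selection step, assuming a deflection angle identification model that outputs a yaw angle per image and an illustrative 15° threshold:

```python
from dataclasses import dataclass
import numpy as np

ANGLE_THRESHOLD_DEG = 15.0  # assumed preset threshold for the captured angle

@dataclass
class Capture:
    image: np.ndarray
    yaw_deg: float  # captured angle predicted by the deflection angle model

def select_first_images(captures: list) -> list:
    """Select the first images: captures whose captured angle is smaller than
    the preset threshold. At least two are needed for triangulation."""
    first = [c for c in captures if abs(c.yaw_deg) < ANGLE_THRESHOLD_DEG]
    if len(first) < 2:
        raise ValueError("need at least two first images for triangulation")
    return first
```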
- Step S2: Determine the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the parameters of the acquisition device corresponding to the first image.
- The position of the key point of the target object in the first image is obtained through a key point calibration model; the key point calibration model may be a neural network, such as a convolutional neural network or a residual network, which is not limited in this application.
- According to the positions of the key point in the at least two first images and the parameters of the image acquisition devices corresponding to the first images, the position of the key point in the world coordinate system is solved by triangulation.
- The position of the key point in an image and the position of the key point in the world coordinate system can be represented by coordinates. The homogeneous coordinates of any key point in the at least two first images are expressed as $(u_1, v_1, 1)$ and $(u_2, v_2, 1)$, and the camera projection matrices corresponding to the two first images are expressed as:

$$M_1 = \begin{bmatrix} f_{x1} & 0 & u_{01} & 0 \\ 0 & f_{y1} & v_{01} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad M_2 = \begin{bmatrix} f_{x2} & 0 & u_{02} & 0 \\ 0 & f_{y2} & v_{02} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

- For each matrix, the remaining entries take the value 0 and the entry in the third row, third column takes the value 1; $f_{x1}$ (respectively $f_{x2}$) is the ratio of the focal length of the camera of the corresponding first image to the width of an image pixel in the x-axis direction, with the x-axis parallel to the u-axis; $f_{y1}$ (respectively $f_{y2}$) is the ratio of the focal length to the width of an image pixel in the y-axis direction, with the y-axis parallel to the v-axis; and $(u_{01}, v_{01})$ and $(u_{02}, v_{02})$ are the coordinates in the image of the intersection point of the respective camera's optical axis with the image. Denoting the coordinates of the key point $P$ in the world coordinate system as $(X, Y, Z, 1)$, the projections into the two first images are:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 1)}, \qquad Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 2)}$$

- Here $Z_{c1}$ and $Z_{c2}$ are the scaling factors of the cameras corresponding to the two first images. Formula 1 can be decomposed, by eliminating the scaling factor, into two linear equations in $(X, Y, Z)$ (Formulas 3 and 4), and Formula 2 can be decomposed in the same way to obtain two more (Formulas 5 and 6). Formulas 3-6 constitute four equations with only three unknowns, so the coordinates $(X, Y, Z, 1)$ of the key point $P$ in the world coordinate system can be calculated.
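- The following Python sketch (the names and the least-squares formulation are illustrative, not taken from the patent) implements this triangulation by stacking Formulas 3-6 and solving the resulting overdetermined linear system:

```python
import numpy as np

def triangulate(M1: np.ndarray, M2: np.ndarray,
                uv1: tuple, uv2: tuple) -> np.ndarray:
    """Recover (X, Y, Z) from two views. Eliminating the scaling factor in
    Z_c * (u, v, 1)^T = M @ (X, Y, Z, 1)^T gives two linear equations per
    view: (u * M[2] - M[0]) @ P = 0 and (v * M[2] - M[1]) @ P = 0.
    Two views yield four equations in three unknowns (Formulas 3-6),
    solved here in the least-squares sense."""
    rows, rhs = [], []
    for M, (u, v) in ((M1, uv1), (M2, uv2)):
        for coeff, r in ((u, 0), (v, 1)):
            a = coeff * M[2] - M[r]   # coefficients acting on (X, Y, Z, 1)
            rows.append(a[:3])
            rhs.append(-a[3])
    XYZ, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return XYZ
```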
- Step S3: Determine the position of the key point in the other images according to the parameters of the acquisition devices corresponding to the other images and the position of the key point in the world coordinate system.
- Let the homogeneous coordinates of any key point obtained in step S2 be $(u, v, 1)$ in one of the other images, and let the camera projection matrix corresponding to that image be $M_b$, defined as above, where $f_x$ is the ratio of the focal length of the camera of the other image to the width of an image pixel in the x-axis direction (the x-axis is parallel to the u-axis), $f_y$ is the ratio of the focal length to the width of an image pixel in the y-axis direction (the y-axis is parallel to the v-axis), and $(u_0, v_0)$ is the coordinate in the image of the intersection point of the camera's optical axis with the image. The image coordinates of the key point in the other image are calculated according to Formula 7, where $Z_{cb}$ is the scaling factor of the camera corresponding to the other image:

$$Z_{cb} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_b \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 7)}$$
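- Reusing the world_to_image helper sketched above, Formula 7 amounts to one forward projection per remaining view; an illustrative sketch:

```python
import numpy as np

def label_other_images(P_world: np.ndarray, other_Ms: dict) -> dict:
    """Formula 7: project the key point's world coordinates (X, Y, Z, 1) into
    each remaining image through that image's projection matrix M_b, giving
    its pixel coordinates (u, v) there without manual calibration."""
    return {img_id: world_to_image(M_b, P_world)
            for img_id, M_b in other_Ms.items()}
```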
- The method further includes: calibrating the position of the key point in the world coordinate system, so that the position of the key point in the world coordinate system is located in the key area of the target object; and updating the positions of the key point in the multiple acquired images according to the calibrated position of the key point in the world coordinate system and the parameters of the acquisition devices corresponding to the multiple acquired images.
- The area within a first distance around the real position of the key point in the world coordinate system is the key area, and the key area is located on the target object. The position of the key point in the world coordinate system can be calibrated by the least squares method, the gradient descent method, Newton's method, or the iterative nonlinear least squares method.
- The position of the key point 31 in the world coordinate system calculated in step S2 may not be located on the target object; for example, it may lie in front of the tip of the nose, deviating from its true position. Therefore, the position of the key point in the world coordinate system needs to be calibrated. Since the calibrated position of the key point in the world coordinate system has changed, it can be seen from the above formulas that the positions of the key point in the first images and the other images will also be updated. A sketch of one such refinement is given below.
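- One possible realization of this calibration step, sketched with SciPy's nonlinear least squares (the patent permits any of the listed methods; the function below is an assumption, not the patent's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def calibrate_world_point(P0: np.ndarray, Ms: list, uvs: list) -> np.ndarray:
    """Refine an initial world position (X, Y, Z) by minimizing the
    reprojection error over all views in which the key point was located;
    gradient descent or Newton's method could be substituted."""
    def residuals(XYZ):
        P = np.append(XYZ, 1.0)
        res = []
        for M, (u, v) in zip(Ms, uvs):
            p = M @ P
            res.extend([p[0] / p[2] - u, p[1] / p[2] - v])
        return res
    return least_squares(residuals, P0).x
```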
- If the key point calibration model can identify the position of the key point in the first image very accurately, the calibration of its position in the world coordinate system can be omitted. Alternatively, when the key point calibration model identifies the position of the key point in the first image very accurately, the calibrated position of the key point in the world coordinate system remains unchanged, so that the updated positions of the key point in the multiple acquired images are the same as before the update.
- The position of the key point in the first image can be obtained through a key point calibration model, and the key point calibration model is obtained by training in the following manner: acquiring the other images and the determined positions of the key point in the other images; using the determined positions of the key point in the other images as the first training target, and training the key point calibration model according to the other images until the difference between the positions of the key point in the other images obtained by the key point calibration model and the first training target converges.
- The key point calibration model can be a key point convolutional neural network model, and the training samples can be multiple other images with key point positions. One form of expression of a key point position is the coordinates of the key point in the other image; that is, the coordinates of the key point in the other image correspond to that image and are used to identify the key point in it.
- During training, the parameters of the key point convolutional neural network model can be initialized, and then the multiple other images are input into the key point convolutional neural network model, which processes them and outputs the coordinates of the key points in the multiple other images. The output coordinates are compared with the coordinates of the key points of the training samples in the multiple other images, for example by performing a corresponding operation to obtain a difference value; the initialized key point convolutional neural network model is adjusted according to the difference value, the adjusted model processes the other images of the training samples to obtain a new difference value, and this is iterated repeatedly until the difference value converges. A preset condition that the difference value should meet can also be set: the adjusted network model processes the other images of the training samples, and the iteration repeats until the difference value satisfies the preset condition. A minimal training-loop sketch is given below.
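- A minimal sketch, assuming PyTorch and a model that regresses key point coordinates; the optimizer, learning rate, and tolerance are illustrative assumptions:

```python
import torch
from torch import nn

def train_keypoint_model(model: nn.Module, loader, epochs: int = 100,
                         tol: float = 1e-4) -> nn.Module:
    """Train the key point model to regress the coordinates produced by the
    pipeline above (the first training target) until the difference value,
    here an L2 loss, converges below an assumed tolerance."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.MSELoss()  # the "difference value" of the text
    for _ in range(epochs):
        epoch_loss = 0.0
        for images, target_coords in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), target_coords)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if epoch_loss / len(loader) < tol:  # preset condition met
            break
    return model
```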
- The training samples can also be multiple acquired images with updated key point positions, where the updated coordinates of the key points in an acquired image correspond to that image and are used to identify the key points in it. In this case, the updated positions of the key points in the collected images are used as a third training target to train the key point calibration model. The training method is the same as the training method above and will not be repeated here.
- The key point calibration model can also be obtained by training in the following manner: taking the depth of the key point in the multiple captured images as the second training target, and training the key point calibration model according to the multiple captured images until the difference between the depth obtained by the key point calibration model and the second training target converges, wherein the depth is obtained according to the position of the key point in the world coordinate system and the captured angles of the target object in the multiple captured images.
- One representation of the position of the key point in the world coordinate system is the coordinates of the key point in the world coordinate system. After obtaining these coordinates, the depth of the key point relative to the acquisition device can be obtained according to the coordinates of the acquisition device in the world coordinate system.
- When the multiple captured images are face images, one key point is selected from the multiple key points as the reference key point, and the depth of the reference key point relative to the acquisition device is subtracted from the depth of every key point relative to the acquisition device; the depth of the key points in each face image is then obtained according to the captured angle. For example, in the frontal face image (i.e., the first image), the nose tip key point 31 is selected as the reference key point, the Z values of key points 1-68 are compared with the Z value of the nose tip key point 31, and the Z value differences of all key points relative to the nose tip key point 31 are obtained; each Z value difference is used as the depth of the corresponding key point to determine the depth information of the key points in the image. For the other images, the depth of a key point can be obtained according to the captured angle of the image and the position of the key point relative to the nose tip key point 31.
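- An illustrative sketch of computing these relative-depth labels, assuming the 68 key points are stored zero-based (so the nose tip key point 31 is index 30) and that a rotation matrix R is derived from the captured angle:

```python
import numpy as np

NOSE_TIP = 30  # key point 31 of Fig. 5b, assuming zero-based storage

def relative_depths(world_pts: np.ndarray, R: np.ndarray) -> np.ndarray:
    """Depth labels for one view: rotate the 68x3 array of world-coordinate
    key points by the view's rotation R (derived from its captured angle),
    take the Z component, and subtract the nose-tip Z value so every depth
    is expressed relative to reference key point 31."""
    z = (world_pts @ R.T)[:, 2]
    return z - z[NOSE_TIP]
```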
- The software code of the face key point calibration method in steps S100-S180 in Fig. 5a can be stored in the memory, and the processor of the electronic device or the server runs the software code to realize the face key point calibration.
- The face key point calibration method comprises the following steps:
- Step S100: Obtain the original images of the person in the same posture at different angles and the internal parameters of the cameras corresponding to the original images.
- The original images can be collected by a camera array. The camera array is composed of multiple cameras arranged around the person to be collected, and each camera in the camera array has a different angle relative to the person, so as to ensure that at least two cameras in the camera array face the person's face and all facial features of the person can be captured by the cameras.
- The types of cameras in the camera array can be the same or different: the camera array can use all IR cameras or all RGB cameras, or IR cameras and RGB cameras can be mixed. Since the internal parameters of each camera are determined when the camera leaves the factory, the camera projection matrix can be obtained from the parameters of the camera.
- When the server used to calibrate key points is a local server, it is only necessary to transmit the original images to the local server through a data cable, and the local server can directly obtain the parameters of the cameras corresponding to the original images.
- When the server used to calibrate key points is a cloud server, it is also necessary to send the camera parameters corresponding to each image to the server for use in subsequent steps.
- The camera parameters may include the internal parameters of each camera in the camera array. The internal reference of the camera, also known as the camera projection matrix, is a parameter provided with each calibrated camera. Using the camera projection matrix, the coordinates of the captured target object in the world coordinate system can be converted into coordinates on the image.
- Step S110: Intercept the face area in the original image, and adjust the image of the intercepted face area to a preset size to obtain the face image to be recognized.
- The face image to be recognized obtained in step S110 is the size-standardized captured image described in steps S1-S3. Since the original images of the current person collected by the camera array at different angles also include the person's body parts, it is necessary to locate the area where the face is located in the original image, intercept the face area from the original image, and adjust the intercepted face area to a preset size to obtain the face image to be recognized.
- The original images collected by the camera array can be input into an image segmentation model, which intercepts the face area in each original image; the images of the face areas are then unified in size to obtain face images to be recognized of the same size.
- Step S120: Use the captured angle recognition model to identify the posture of the face in each face image, obtain the captured angle of the face, and select at least two images whose captured angles are smaller than the preset value as the first front face image and the second front face image.
- The first front face image and the second front face image are the at least two first images described in steps S1-S3. In this embodiment, only two images, the first front face image and the second front face image, are selected as the first images, but this application is not limited thereto; three or more images may also be selected.
- Step S130: Use the key point calibration model to identify key points in the first front face image and the second front face image, and obtain the image coordinates of the key points in the first front face image and the second front face image.
- In addition, all the face images obtained in step S120 can also be input into the key point calibration model to obtain the image coordinates of the key points in each face image.
- The captured angle recognition model and the key point calibration model may be the same neural network, such as a convolutional neural network or a residual network, which is not limited in this application. When the captured angle recognition model and the key point calibration model are the same neural network, multiple face images are input into the same neural network, which outputs the coordinates of the key points in each face image and the captured angle of the face in each face image; at least two images whose face capture angles are smaller than the preset value are then selected from the multiple face images as the first front face image and the second front face image.
- Fig. 5b shows the face key point calibration rule provided by the embodiment of the present application, which requires 68 key points to be marked on the face image.
- After a face image is input into the key point calibration model, the model calibrates the key points of the face according to the rule shown in Fig. 5b and outputs the image coordinates $(u_i, v_i, 1)$ of each key point, where $i$ denotes the $i$-th key point identified in the face image.
- Since the camera array surrounds the person being collected, at least two face images whose captured angles are smaller than the preset value can be selected from the multiple face images; such images fully display the facial features, avoiding the situation where the rotation angle of the face is too large and the facial features are blocked or not fully displayed.
- Step S140: According to the parameters of the cameras corresponding to the first front face image and the second front face image and the image coordinates of the key points in the first front face image and the second front face image, use triangulation to determine the initial coordinates of the key points in the world coordinate system.
- Fig. 4a shows a schematic diagram of determining the initial coordinates of a key point in the world coordinate system by triangulation. The image point of the key point $P$ on the first camera $C_1$ is $p_1$, and the image point on the second camera $C_2$ is $p_2$; the optical center of the first camera $C_1$ is $O_1$, and the optical center of the second camera $C_2$ is $O_2$. The position of the key point $P$ in the world coordinate system is the intersection of the straight line $O_1 p_1$ and the straight line $O_2 p_2$.
- From step S130, the first image coordinates $(u_1, v_1, 1)$ of any key point in the first front face image and its second image coordinates $(u_2, v_2, 1)$ in the second front face image are known. From step S100, the internal reference of each camera in the camera array is known; the internal reference of the first camera corresponding to the first front face image is the first camera projection matrix $M_1$, and the internal reference of the second camera corresponding to the second front face image is the second camera projection matrix $M_2$:

$$M_1 = \begin{bmatrix} f_{x1} & 0 & u_{01} & 0 \\ 0 & f_{y1} & v_{01} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \qquad M_2 = \begin{bmatrix} f_{x2} & 0 & u_{02} & 0 \\ 0 & f_{y2} & v_{02} & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}$$

- Here $f_{x1}$ ($f_{x2}$) is the ratio of the focal length of the camera of the first (second) front face image to the width of an image pixel in the x-axis direction, with the x-axis parallel to the u-axis; $f_{y1}$ ($f_{y2}$) is the ratio of the focal length to the width of an image pixel in the y-axis direction, with the y-axis parallel to the v-axis; $(u_{01}, v_{01})$ and $(u_{02}, v_{02})$ are the coordinates in the image of the intersection point of the respective camera's optical axis with the image; and $Z_{c1}$ and $Z_{c2}$ denote the scaling factors of the cameras corresponding to the first and second front face images. The projection equations are:

$$Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = M_1 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 1)}, \qquad Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = M_2 \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 2)}$$

- Formula 1 can be decomposed, by eliminating the scaling factor, into two linear equations (Formulas 3 and 4), and Formula 2 can be decomposed in the same way to obtain two more (Formulas 5 and 6). Formulas 3-6 constitute four equations with only three unknowns, so the coordinates $(X, Y, Z, 1)$ of the key point $P$ in the world coordinate system can be calculated.
- The key point calibration model can more accurately identify the positions of the key points in the first front face image and the second front face image. Therefore, it is more reliable to calculate the coordinates of the key points in the world coordinate system using the first image coordinates and the second image coordinates of the key points in these two images.
- Step S150: Calibrate the initial coordinates of the key points in the world coordinate system, and obtain the final coordinates of the key points in the world coordinate system.
- The least squares method can be used to calibrate the coordinates of the key points in the world coordinate system, but the calibration method is not limited to the least squares method; the gradient descent method, Newton's method, or the iterative nonlinear least squares method can also be used.
- When the spatial position of the key point is not at the intersection of the straight line $O_1 p_1$ and the straight line $O_2 p_2$ (as shown in Fig. 4b), step S150 needs to be executed. If the model can identify the positions of the key points in the first front face image and the second front face image very accurately, the intersection of the line $O_1 p_1$ and the line $O_2 p_2$ is the position of the key point $P$ in the world coordinate system, and step S150 can be omitted.
- Through step S150, the final coordinates $(X, Y, Z, 1)$ in the world coordinate system of each of the 68 key points of the front face image shown in Fig. 5b can be obtained.
- Step S160: Determine the depth of the key points in each face image according to the final coordinates of the key points in the world coordinate system and the captured angle of the face.
- Through step S150, the coordinates $(X, Y, Z, 1)$ in the world coordinate system of any key point in a face image can be obtained, wherein the Z value of the coordinates is the depth of the key point relative to the camera corresponding to the face image. However, because each face image is obtained by size standardization of an original image captured by a camera, all face images have the same size; as a result, even if the distances between two cameras and the face are different, the depths of the face relative to the cameras appear the same from the face images. If the Z value of the key point coordinates in the world coordinate system were used directly as the depth to train the model, the model would not be able to accurately identify the depth of the key points. Instead, the depth of the key points is obtained according to the captured angle of the face and the positions of the key points relative to the nose tip key point 31.
- The determined depths of the key points can also be used as a training target, and the above multiple face images can be used to train the key point calibration model, so that the key point calibration model in step S130 further has the ability to predict the depth of key points.
- The depths of the key points obtained in step S160 can also be stored in the memory for use in other recognition operations, or input into other neural networks for training, which is not limited in this application.
- Step S170: Determine the image coordinates of the key points in the face images according to the final coordinates of the key points in the world coordinate system and the parameters of the cameras corresponding to the face images.
- When the final coordinates of the key points in the world coordinate system differ from the initial coordinates, step S170 updates the image coordinates of the key points in the first front face image and the second front face image obtained in step S130. When, in addition, all the face images obtained in step S120 were input into the key point calibration model in step S130, step S170 updates the image coordinates of the key points in all the face images.
- For example, the key point calibration model may not be able to accurately identify the position of the key point corresponding to the corner of the eye, resulting in a deviation of that key point. In this case, the coordinates of the key point in the face image can be determined from the coordinates of the key point in the world coordinate system and the parameters of the camera corresponding to the face image. The coordinates $(X, Y, Z, 1)$ of the key point in the world coordinate system have been obtained in step S140 or step S150; since the coordinates of the key point in the world coordinate system are invariant, they can be used to determine the image coordinates of the key point in the other face images.
- Let $(u, v, 1)$ be the image coordinates of a key point in another face image, and $(X, Y, Z, 1)$ the coordinates of the key point in the world coordinate system calculated in step S140 or step S150. Let $M_b$ be the projection matrix of the camera corresponding to the other face image, defined as above, where $f_x$ is the ratio of the focal length of that camera to the width of an image pixel in the x-axis direction (the x-axis parallel to the u-axis), $f_y$ is the ratio of the focal length to the width of an image pixel in the y-axis direction (the y-axis parallel to the v-axis), and $(u_0, v_0)$ is the coordinate in the image of the intersection point of the camera's optical axis with the image. The coordinates of the key point in the other face image are calculated according to Formula 7, where $Z_{cb}$ is the scaling factor of the camera corresponding to the other face image:

$$Z_{cb} \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M_b \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \quad \text{(Formula 7)}$$
- After the image coordinates are updated, the key point calibration model can be trained with the face images marked with the updated image coordinates, and the parameters of the key point calibration model can be updated to realize iterative optimization of the key point calibration model, thereby improving its ability to identify the key points of face images.
- When the key points in the first front face image and the second front face image recognized by the key point calibration model in step S130 are relatively accurate, the initial coordinates of the key points in the world coordinate system calculated in step S140 are located in the key area, and step S150 can be omitted; alternatively, the final coordinates of the key points in the world coordinate system obtained through step S150 may be the same as the initial coordinates obtained before calibration, so the image coordinates of the key points in the first front face image and the second front face image determined in step S170 may be the same as the image coordinates obtained in step S130.
- Step S180: Map the image coordinates of the key points in the face image back to the original image, and obtain the image coordinates of the key points in the original image.
- Since the face image is intercepted and resized from the original image, the image coordinates of the key points in the face image are not the image coordinates of the key points in the original image. The calibrated image coordinates in the face image are therefore mapped back to the original image, and the image coordinates of the key points in the original image are obtained through coordinate transformation, as sketched below.
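- A sketch of this inverse mapping under the same assumptions as the size-normalization sketch above (an (x, y, w, h) face box and an assumed 256×256 preset size):

```python
def to_original_coords(u: float, v: float, box: tuple,
                       preset=(256, 256)) -> tuple:
    """Invert the crop-and-resize of step S110: map a key point's image
    coordinates in the size-standardized face image back into the original
    image, where `box` is the (x, y, w, h) face area that was intercepted."""
    x, y, w, h = box
    return (x + u * w / preset[0], y + v * h / preset[1])
```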
- Through step S170 and step S160, the image coordinates of the key points in the face image and the depths of the key points have been obtained.
- At this point, the face images marked with key point image coordinates and key point depths can be used to train the key point calibration model. The training method can be as follows: initialize the parameters of the key point calibration model; input a face image into the key point calibration model, which processes it and outputs the image coordinates and/or depths of the key points in the face image; compare the output coordinates and/or depths with the coordinates and/or depths marked in the training sample, for example by performing a corresponding operation to obtain a difference value; adjust the initialized key point calibration model according to the difference value; process further training-sample images with the adjusted model to calculate a new difference value; and judge whether the new difference value meets the preset condition. If the preset condition is met, the target key point convolutional neural network model is obtained; if not, the iteration continues until the preset condition is met. A sketch of such a joint coordinate-and-depth difference value is given below.
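- An illustrative difference value for joint training on coordinates and depth, assuming PyTorch and a model with two outputs; the loss form and weighting are assumptions:

```python
import torch
from torch import nn

def joint_difference(pred_coords: torch.Tensor, pred_depth: torch.Tensor,
                     gt_coords: torch.Tensor, gt_depth: torch.Tensor,
                     depth_weight: float = 1.0) -> torch.Tensor:
    """Difference value for a model with two outputs: key point image
    coordinates and key point depth; the weighting of the two terms is an
    assumption."""
    return (nn.functional.mse_loss(pred_coords, gt_coords)
            + depth_weight * nn.functional.mse_loss(pred_depth, gt_depth))
```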
- Since the image coordinates and depths of the key points in the face images are guaranteed to be accurate by step S170 and step S150, using them to train the key point calibration model can improve the accuracy of the key point calibration model for key point identification, and at the same time give the key point calibration model the ability to predict the depth of key points.
- The coordinates of the key points in the original image and the depths of the key points obtained through step S180 can be stored in a memory or folder for subsequent use. When the server is a cloud server, the coordinates of the key points in the image and the depths of the key points are stored in cloud storage for subsequent use, or can be sent back to the local terminal (camera, mobile phone, computer, etc.) for subsequent use.
- In the embodiments of the present application, the expression "coordinates of the key point in the image" or "image coordinates of the key point in the image" refers to the row and column of the image pixel corresponding to the key point, denoted $(u, v)$.
- Fig. 3 shows a schematic diagram of the modules of the key point calibration device provided by the embodiment of the present application. The key point calibration device provided in the embodiment of the present application includes: a transceiver module 1000 and a processing module 2000.
- The transceiver module 1000 is used to acquire multiple captured images and the parameters of the capture devices corresponding to the multiple captured images. The target objects in the multiple captured images have the same posture and are captured at different angles; the multiple captured images include first images and other images, wherein the captured angle of the target object in the first images is smaller than a preset threshold, and the first images include at least two images.
- The processing module 2000 is configured to determine the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the parameters of the acquisition device corresponding to the first image.
- The processing module 2000 is further configured to determine the position of the key point in the other images according to the parameters of the acquisition devices corresponding to the other images and the position of the key point in the world coordinate system.
- The plurality of acquired images are size-normalized images.
- The processing module 2000 is specifically configured to solve the position of the key point in the world coordinate system by triangulation according to the positions of the key point in the at least two first images and the parameters of the image acquisition devices corresponding to the first images.
- The processing module 2000 is further configured to: calibrate the position of the key point in the world coordinate system, so that the position of the key point in the world coordinate system is located in the key area of the target object; and update the positions of the key point in the multiple captured images according to the calibrated position of the key point in the world coordinate system and the parameters of the acquisition devices corresponding to the multiple captured images.
- the parameters of the image acquisition device include the intrinsic parameters of the cameras in the camera array;
- the processing module is specifically configured to determine the position of the key point in the world coordinate system according to the position of the key point of the target object in the first image and the intrinsic parameters of the cameras in the camera array.
- the target object includes a human face.
- the position of the key point in the first image is obtained through a key point calibration model
- the transceiver module 1000 is further configured to acquire the other images and the determined positions of the key point in the other images;
- the processing module 2000 is further configured to use the determined position of the key point in the other images as a first training target, and to train the key point calibration model on the other images until the difference between the position of the key point in the other images output by the key point calibration model and the first training target converges.
- the processing module 2000 is further configured to use the depth of the key point in the multiple images as a second training target, and to train the key point calibration model on the multiple captured images until the difference between the depth output by the key point calibration model and the second training target converges, wherein the depth is obtained from the position of the key point in the world coordinate system and the capture angle of the target object (a combined-loss sketch follows).
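- A minimal sketch of training against both targets at once, assuming a model head that outputs per-key-point (u, v) coordinates and a depth; the mean-squared-error losses and the weighting `w_depth` are assumptions, not the patent's specification:

```python
import torch.nn.functional as F

def combined_loss(pred_uv, pred_depth, target_uv, target_depth, w_depth=1.0):
    """Difference value combining the first (position) and second
    (depth) training targets."""
    pos_loss = F.mse_loss(pred_uv, target_uv)          # first training target
    depth_loss = F.mse_loss(pred_depth, target_depth)  # second training target
    return pos_loss + w_depth * depth_loss
```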
- the above-mentioned modules, that is, the transceiver module 1000 and the processing module 2000, are configured to execute the relevant steps of the above-mentioned method.
- the transceiver module 1000 is used to execute the related content of step S1 and step S100
- the processing module 2000 is used to execute the related content of step S2, step S3, step S110 to step S180, etc.
- the key point calibration device is presented here in the form of modules.
- a “module” here may refer to an application-specific integrated circuit (ASIC), a processor and memory executing one or more software or firmware programs, an integrated logic circuit, and/or other devices that can provide the above functions.
- the above transceiver module 1000 and processing module 2000 can be implemented by the computing device shown in FIG. 6.
- FIG. 6 is a schematic structural diagram of a computing device 1500 provided by an embodiment of the present application.
- the computing device 1500 includes: a processor 1510 and a memory 1520 coupled to the processor 1510.
- the memory 1520 is used to store programs or instructions; when the programs or instructions are executed by the processor, the computing device executes the key point calibration method provided in the embodiments of the present application.
- the memory 1520 may be a storage unit inside the processor 1510, or an external storage unit independent of the processor 1510, or may include both a storage unit inside the processor 1510 and an external storage unit independent of the processor 1510.
- the computing device 1500 may further include a bus and a communication interface (not shown in the figure).
- the memory 1520 and the communication interface may be connected to the processor 1510 through a bus.
- the bus may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like.
- the bus can be divided into an address bus, a data bus, a control bus, and so on.
- the processor 1510 may be implemented by a device such as a CPU.
- the processor may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
- a general-purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
- the processor 1510 uses one or more integrated circuits for executing related programs, so as to implement the technical solutions provided by the embodiments of the present application.
- the memory 1520 may include read-only memory and random-access memory, and provides instructions and data to the processor 1510. The memory 1520 may also include non-volatile random-access memory; for example, it may also store device type information.
- the processor 1510 executes the computer-executable instructions in the memory 1520 to perform the automatic calibration of image key points described in the present application.
- the computing device 1500 may correspond to the body executing the methods according to the various embodiments of the present application, and the above-mentioned and other operations and/or functions of the modules in the computing device 1500 are intended to realize the corresponding processes of the methods in those embodiments; for the sake of brevity, they are not repeated here.
- the disclosed systems, devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division; in actual implementation there may be other division methods. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- if the functions described above are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium.
- the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
- the aforementioned storage media include: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
- the embodiment of the present application also provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, it carries out the key point calibration method, including at least one of the implementations described in the above embodiments.
- the computer storage medium of the embodiment of the present application can adopt any combination of one or more computer-readable media.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination thereof. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer disk, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
- a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer readable signal medium may include a data signal carrying computer readable program code in baseband or as part of a carrier wave. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for performing the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as C or similar languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- the embodiment of the present application further provides a computer program product, and when the program code contained in the computer program product is executed by the processor in the terminal, the key point calibration method provided in the above embodiment is implemented.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The present application relates to the field of artificial intelligence and in particular to a key point calibration method. The method comprises: acquiring multiple captured images and the parameters of the capture devices corresponding to the multiple captured images, the target objects in the multiple captured images having the same posture and different capture angles, the multiple captured images comprising a first image and other images, the capture angle of the target object in the first image being smaller than a preset threshold, and the first image comprising at least two images; determining the positions of key points of the target objects in the world coordinate system according to the positions of the key points in the first image and the parameters of the capture device corresponding to the first image; and determining the positions of the key points in the other images according to the parameters of the capture devices corresponding to the other images and the positions of the key points in the world coordinate system. Key points are thus calibrated automatically, which reduces the consumption of human resources, while the accuracy of the key point calibration is ensured, making the calibration result practical to use.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180001870.9A CN113454684A (zh) | 2021-05-24 | 2021-05-24 | 一种关键点标定方法和装置 |
PCT/CN2021/095539 WO2022246605A1 (fr) | 2021-05-24 | 2021-05-24 | Procédé et appareil d'étalonnage de points-clés |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/095539 WO2022246605A1 (fr) | 2021-05-24 | 2021-05-24 | Procédé et appareil d'étalonnage de points-clés |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022246605A1 true WO2022246605A1 (fr) | 2022-12-01 |
Family
ID=77819505
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/095539 WO2022246605A1 (fr) | 2021-05-24 | 2021-05-24 | Procédé et appareil d'étalonnage de points-clés |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113454684A (fr) |
WO (1) | WO2022246605A1 (fr) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114220149A (zh) * | 2021-12-09 | 2022-03-22 | 东软睿驰汽车技术(沈阳)有限公司 | 一种头部姿势真值的获取方法、装置、设备及存储介质 |
CN115620094B (zh) * | 2022-12-19 | 2023-03-21 | 南昌虚拟现实研究院股份有限公司 | 关键点的标注方法、装置、电子设备及存储介质 |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003279310A (ja) * | 2002-03-22 | 2003-10-02 | Canon Inc | 位置姿勢補正装置、及び位置姿勢補正方法 |
CN110091891A (zh) * | 2019-05-05 | 2019-08-06 | 中铁检验认证中心有限公司 | 高速列车动态限界测量方法、装置、存储介质及电子设备 |
CN110738143A (zh) * | 2019-09-27 | 2020-01-31 | Oppo广东移动通信有限公司 | 定位方法及装置、设备、存储介质 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110447220B (zh) * | 2017-03-21 | 2021-03-09 | 奥林巴斯株式会社 | 校准装置、校准方法、光学装置、摄影装置以及投影装置 |
CN111819568B (zh) * | 2018-06-01 | 2024-07-09 | 华为技术有限公司 | 人脸旋转图像的生成方法及装置 |
CN111160178B (zh) * | 2019-12-19 | 2024-01-12 | 深圳市商汤科技有限公司 | 图像处理方法及装置、处理器、电子设备及存储介质 |
CN112767489B (zh) * | 2021-01-29 | 2024-05-14 | 北京达佳互联信息技术有限公司 | 一种三维位姿确定方法、装置、电子设备及存储介质 |
- 2021
- 2021-05-24 CN CN202180001870.9A patent/CN113454684A/zh active Pending
- 2021-05-24 WO PCT/CN2021/095539 patent/WO2022246605A1/fr active Application Filing
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2003279310A (ja) * | 2002-03-22 | 2003-10-02 | Canon Inc | 位置姿勢補正装置、及び位置姿勢補正方法 |
CN110091891A (zh) * | 2019-05-05 | 2019-08-06 | 中铁检验认证中心有限公司 | 高速列车动态限界测量方法、装置、存储介质及电子设备 |
CN110738143A (zh) * | 2019-09-27 | 2020-01-31 | Oppo广东移动通信有限公司 | 定位方法及装置、设备、存储介质 |
Also Published As
Publication number | Publication date |
---|---|
CN113454684A (zh) | 2021-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019161813A1 (fr) | Procédé, appareil et système de reconstruction tridimensionnelle de scène dynamique, serveur et support | |
CN109934115B (zh) | 人脸识别模型的构建方法、人脸识别方法及电子设备 | |
WO2018188453A1 (fr) | Procédé de détermination d'une zone de visage humain, support de stockage et dispositif informatique | |
WO2018108129A1 (fr) | Procédé et appareil destinés à l'identification d'un type d'objet, et dispositif électronique | |
WO2017054455A1 (fr) | Procédé et système de détection d'ombre de cible en mouvement dans une vidéo de surveillance | |
WO2022246605A1 (fr) | Procédé et appareil d'étalonnage de points-clés | |
JP2017033469A (ja) | 画像識別方法、画像識別装置及びプログラム | |
CN107705322A (zh) | 运动目标识别跟踪方法和系统 | |
CN111062263B (zh) | 手部姿态估计的方法、设备、计算机设备和存储介质 | |
JP2018026131A (ja) | 動作解析装置 | |
WO2022237153A1 (fr) | Procédé de détection de cible et procédé d'entraînement de modèle correspondant, ainsi qu'appareil, support et produit de programme associés | |
CN112200056B (zh) | 人脸活体检测方法、装置、电子设备及存储介质 | |
WO2023060964A1 (fr) | Procédé d'étalonnage et appareil, dispositif, support de stockage et produit-programme informatique associés | |
US11663463B2 (en) | Center-biased machine learning techniques to determine saliency in digital images | |
CN111144207A (zh) | 一种基于多模态信息感知的人体检测和跟踪方法 | |
WO2022165722A1 (fr) | Procédé, appareil et dispositif d'estimation de profondeur monoculaire | |
WO2023279584A1 (fr) | Procédé de détection de cible, appareil de détection de cible et robot | |
WO2022252118A1 (fr) | Procédé et appareil de mesure de port de tête | |
US20240104769A1 (en) | Information processing apparatus, control method, and non-transitory storage medium | |
CN108550167B (zh) | 深度图像生成方法、装置及电子设备 | |
WO2023093086A1 (fr) | Procédé et appareil de suivi de cible, procédé d'entraînement et appareil pour un modèle associé, et dispositif, support et produit programme informatique | |
CN111353325A (zh) | 关键点检测模型训练方法及装置 | |
CN110781712B (zh) | 一种基于人脸检测与识别的人头空间定位方法 | |
WO2022247126A1 (fr) | Procédé et appareil de localisation visuelle, dispositif, support et programme | |
KR102454715B1 (ko) | 영상에 기반하여 동물의 승가 행위를 검출하는 장치 및 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
- | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21942194; Country of ref document: EP; Kind code of ref document: A1 |
- | NENP | Non-entry into the national phase | Ref country code: DE |
- | 122 | Ep: PCT application non-entry in European phase | Ref document number: 21942194; Country of ref document: EP; Kind code of ref document: A1 |