CN117441190A - Part positioning method and device - Google Patents
Part positioning method and device
- Publication number: CN117441190A (application number CN202280005358.6A)
- Authority: CN (China)
- Prior art keywords: user, image, coordinate system, model, acquisition device
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Abstract
The application relates to a part positioning method and device. The method includes: when a first user is detected, acquiring a first image and a second image of the first user through a first image acquisition device and a second image acquisition device respectively, where the first user represents a user currently requiring model optimization, the first image and the second image have different viewing angles relative to the first user, and the difference between the acquisition times of the first image and the second image of the first user is less than or equal to a first threshold; optimizing a model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user; and locating a target part of the first user based on the personalized model of the first user. In the embodiments of the application, the user's target part can be accurately located without configuring a depth measurement device.
Description
The application relates to the technical field of intelligent vehicles, and in particular to a part positioning method and device.
Relying on the multi-modal nature of the cabin, many in-cabin applications are based on locating the driver's head and eyes. Applications such as gaze tracking and augmented reality head-up display (Augmented Reality Head-Up Display, AR-HUD) anti-dizziness require computation based on accurate driver head-eye positioning results; the accuracy of driver head-eye positioning therefore determines the practical accuracy of many applications and is a very important cabin requirement.
With the development of deep learning, methods have appeared for accurately estimating the eye positions of a human head from Red Green Blue (RGB) head images together with depth head images captured at the same moment. Such a method generally fits a personalized head model based on the RGB head image and the depth head image at the same moment, and then uses the world coordinate system of the personalized head model and the coordinate system of the RGB head image to convert the head-eye positioning problem into an N-point perspective (Perspective-n-Point, PnP) problem in the field of computer vision for solving. This solving method is accurate and is little affected by occlusion.
However, the above method requires a depth measurement device such as a depth camera to be configured. Although such a device can perform relatively accurate depth measurement, it is generally not configured in a vehicle cabin for reasons of cost, size, and energy consumption. How to accurately position the eyes of a human head without configuring a depth measurement device is therefore a problem to be solved urgently.
Disclosure of Invention
In view of the above, a part positioning method and apparatus are provided, which can accurately locate a target part of a user without a depth measurement device being provided.
In a first aspect, embodiments of the present application provide a part positioning method, the method including: when a first user is detected, acquiring a first image and a second image of the first user through a first image acquisition device and a second image acquisition device respectively, where the first user represents a user currently requiring model optimization, the first image and the second image have different viewing angles relative to the first user, and the difference between the acquisition times of the first image and the second image of the first user is less than or equal to a first threshold; optimizing a model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user; and locating a target part of the first user based on the personalized model of the first user.
In the embodiments of the application, the personalized head model of the first user is obtained from images with different viewing angles and close acquisition times, so that the target part of the first user can be located without configuring a depth measurement device, which saves cost and reduces energy consumption.
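As an illustration only, the three steps of the first aspect can be sketched as follows; the helper names (detect_user, capture, optimize_model, locate_target_part) and the threshold value are hypothetical assumptions, not definitions from the application:

```python
# Hypothetical sketch of the three-step flow of the first aspect; all helper
# functions and the threshold value are illustrative assumptions.
MAX_TIME_DIFF_S = 0.05  # first threshold on the acquisition-time difference

def position_target_part(cam1, cam2, model_to_optimize, target_part):
    user = detect_user()                      # e.g. via seat sensor or camera
    if user is None:
        return None
    img1, t1 = cam1.capture()                 # first image and acquisition time
    img2, t2 = cam2.capture()                 # second image, different viewing angle
    if abs(t1 - t2) > MAX_TIME_DIFF_S:        # images must be nearly synchronous
        return None
    personalized = optimize_model(model_to_optimize, img1, img2)
    return locate_target_part(personalized, target_part)  # e.g. the eyes or ears
```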
In a first possible implementation manner of the method according to the first aspect, the method further includes: when a user is detected, acquiring identity information of the user, where the identity information is used to uniquely identify a user; searching for a personalized model corresponding to the identity information; and when no personalized model corresponding to the identity information is found, determining the detected user as the first user and determining the general model of all users as the model to be optimized.
In a second possible implementation manner of the method according to the first possible implementation manner of the first aspect, the method further includes: when a personalized model corresponding to the identity information is found, determining the detected user as a second user, where the second user represents a user who does not currently need model optimization; and locating a target part of the second user based on the personalized model corresponding to the identity information.
In this way, the model optimization process is omitted before the target part is located, which reduces the waiting time for target positioning, improves efficiency, and improves user experience; meanwhile, reusing the model saves computing resources and reduces energy consumption.
In a third possible implementation manner of the method according to the first possible implementation manner of the first aspect, the method further includes: when a personalized model corresponding to the identity information is found and a model optimization instruction is received, determining the detected user as the first user and determining the personalized model corresponding to the identity information as the model to be optimized; when a personalized model corresponding to the identity information is found and no model optimization instruction is received, determining the detected user as a second user, where the second user represents a user who does not currently need model optimization; and locating a target part of the second user based on the personalized model corresponding to the identity information.
In this way, determining the first user's existing personalized model as the model to be optimized can reduce the number of iterations, reduce the demand for computing resources and energy consumption, and improve the optimization effect; alternatively, the model optimization process can be omitted before positioning, which reduces waiting time, saves computing resources, improves positioning efficiency, and improves user experience.
In a fourth possible implementation manner of the method according to the second or third possible implementation manner of the first aspect, after the target part of the second user is located based on the personalized model corresponding to the identity information, the method further includes: acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively; optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user; and updating the stored personalized model corresponding to the identity information to the optimized model of the second user.
In this way, after the target part of the second user is located, the second user's current personalized model (that is, the personalized model corresponding to the identity information) can be optimized to improve accuracy: on the one hand, time is saved for the current positioning; on the other hand, the accuracy of the next positioning is improved.
In a fifth possible implementation manner of the method according to any one of the first to fourth possible implementation manners of the first aspect, after the personalized model of the first user is obtained, the method further includes: storing the identity information and the personalized model of the first user in association.
In this way, when the same user is encountered next time, the optimization process can be omitted, or the number of model adjustments during optimization can be reduced, which effectively saves resources and time, improves efficiency, and improves user experience.
In a sixth possible implementation manner of the method according to the first aspect or any one of the possible implementation manners of the first aspect, optimizing the model to be optimized of the first user based on the first image and the second image of the first user, to obtain the personalized model of the first user, includes:
determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user, wherein the first key point is used for representing the key point of the first user in the model to be optimized;
determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user;
converting the first coordinate of the first key point in the coordinate system of the first image acquisition device, based on a spatial conversion relationship between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device;
and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
According to the sixth possible implementation manner of the first aspect, in a seventh possible implementation manner of the method, optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device, to obtain a personalized model of the first user, includes:
determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device;
and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
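One concrete form consistent with this description (the application does not fix a particular norm; the squared Euclidean distance below is an assumption) is

$$L(s,\beta)=\sum_{i=1}^{N}\left\|X_i^{(2)}(s,\beta)-\hat{X}_i^{(2)}(s,\beta)\right\|^{2},$$

where $X_i^{(2)}$ is the first coordinate of key point $i$ in the coordinate system of the second image acquisition device, $\hat{X}_i^{(2)}$ is its second coordinate obtained through the spatial conversion relationship, and $s$ and $\beta$ denote the scale and shape parameters, which are adjusted until $L$ meets the preset threshold.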
According to the sixth or seventh possible implementation manner of the first aspect, in an eighth possible implementation manner of the method, the method further includes:
determining the spatial conversion relationship between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device according to the coordinates, in the coordinate system of the first image acquisition device and in the coordinate system of the second image acquisition device, of a calibration point in the world coordinate system.
In a ninth possible implementation manner of the method according to any one of the fifth to eighth possible implementation manners of the first aspect, locating the target part of the first user based on the personalized model of the first user includes: acquiring a third image of the first user through a third image acquisition device; and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and its coordinates in the coordinate system of the third image of the first user, where the second key point represents a point in the personalized model corresponding to the target part of the first user.
Thus, after personalized head modeling is completed, three-dimensional position information of the target part can be obtained using only the personalized head model, an image captured by a single image acquisition device, and an N-point perspective solving algorithm. While ensuring positioning accuracy for the target part, this effectively reduces the demand for computing resources and energy consumption. In addition, dual-camera data does not need to be provided continuously, which further reduces the overall complexity and energy consumption of the model.
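For illustration, a minimal OpenCV sketch of this single-device positioning step follows; it assumes the personalized model's key points, the detected 2D key points in the third image, and the third device's intrinsics K3 are available (all variable names are assumptions):

```python
import numpy as np
import cv2

def locate_target_part(model_pts, img_pts, target_pts, K3):
    """Recover the 3D coordinates of the target part in the third camera's frame.

    model_pts:  Nx3 key points of the personalized head model (model frame)
    img_pts:    Nx2 key points detected in the third image
    target_pts: Mx3 model points of the target part (e.g. the eyes)
    """
    ok, rvec, tvec = cv2.solvePnP(model_pts.astype(np.float64),
                                  img_pts.astype(np.float64),
                                  K3.astype(np.float64), None)
    R, _ = cv2.Rodrigues(rvec)                 # model frame -> camera frame
    return target_pts @ R.T + tvec.reshape(3)  # target part in camera coordinates
```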
In a second aspect, embodiments of the present application provide a part positioning apparatus, the apparatus comprising:
the first acquisition module is used for, when a first user is detected, acquiring a first image and a second image of the first user through the first image acquisition device and the second image acquisition device respectively, where the first user represents a user currently requiring model optimization, the first image and the second image have different viewing angles relative to the first user, and the difference between the acquisition times of the first image and the second image of the first user is less than or equal to a first threshold;
the first optimization module is used for optimizing the model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user;
And the first positioning module is used for positioning the target part in the first user based on the personalized model of the first user.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the identity information of the user under the condition that the user is detected, wherein the identity information is used for identifying the unique user;
the searching module is used for searching the personalized model corresponding to the identity information;
the first determining module is used for determining, when no personalized model corresponding to the identity information is found, that the detected user is the first user, and determining the general model of all users as the model to be optimized.
In one possible implementation, the apparatus further includes:
the second determining module is used for determining that the detected user is a second user under the condition that the personalized model corresponding to the identity information is found, and the second user represents the user who does not need model optimization currently;
and the second positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
In one possible implementation, the apparatus further includes:
the third determining module is used for determining that the detected user is the first user and determining the personalized model corresponding to the identity information as the model to be optimized under the condition that the personalized model corresponding to the identity information is found and a model optimization instruction is received;
a fourth determining module, configured to determine, when a personalized model corresponding to the identity information is found and a model optimization instruction is not received, that the detected user is a second user, where the second user represents a user that does not currently need model optimization;
and the third positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
In a possible implementation manner, after the target part of the second user is located based on the personalized model corresponding to the identity information, the apparatus further includes:
the third acquisition module is used for acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively;
The second optimization module is used for optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user;
and the updating module is used for updating the stored personalized model corresponding to the identity information into the optimized model of the second user.
In one possible implementation, the apparatus further includes:
and the storage module is used for storing the identity information and the personalized model of the first user in a correlated mode.
In one possible implementation manner, the first optimization module is further configured to:
determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user, wherein the first key point is used for representing the key point of the first user in the model to be optimized;
determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user;
converting the first coordinate of the first key point in the coordinate system of the first image acquisition device, based on a spatial conversion relationship between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device;
and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
In a possible implementation manner, optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device, to obtain a personalized model of the first user, includes:
determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device;
and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
In one possible implementation, the apparatus further includes:
and a fifth determining module, configured to determine a spatial conversion relationship between the coordinate system of the first image capturing device and the coordinate system of the second image capturing device according to the coordinates of the calibration point in the world coordinate system in the coordinate system of the first image capturing device and the coordinates in the coordinate system of the second image capturing device.
In one possible implementation, the first positioning module is further configured to:
acquiring a third image of the first user by a third image acquisition device;
and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user, wherein the second key point is used for representing a point corresponding to the target part of the first user in the personalized model.
In a third aspect, embodiments of the present application provide an electronic device, which may perform the method of the above first aspect or of one or more of the possible implementations of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which, when run in an electronic device, causes a processor in the electronic device to perform the part positioning method of the first aspect or of one or more of the possible implementations of the first aspect.
In a fifth aspect, embodiments of the present application provide a part positioning apparatus comprising a memory storing computer program instructions that are executable by a processor to perform the operations of the first aspect or of any implementation of the first aspect above.
The part positioning apparatus described in the fifth aspect may be applied to terminal devices such as vehicles, robots, and smart home devices. When applied to a vehicle, the part positioning apparatus may be the vehicle itself, or a component of the vehicle, such as a central gateway, a telematics box (T-BOX), a human-machine interaction (Human-Machine Interaction, HMI) controller, a mobile data center (Mobile Data Center, MDC), an advanced driving assistance system (Advanced Driving Assistant System, ADAS), or an electronic control unit (Electronic Control Unit, ECU), or a sub-device within one of the above components, or a separate device in the vehicle other than the above components.
In a sixth aspect, embodiments of the present application provide a model training system that includes a first image acquisition device, a second image acquisition device, and a processor.
The first image acquisition device is used for acquiring a first image of a first user, where the first user represents a user currently requiring model optimization; the second image acquisition device is used for acquiring a second image of the first user, where the first image and the second image have different viewing angles relative to the first user and the difference between their acquisition times is within a first threshold; the processor is used for optimizing a model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user.
These and other aspects of the application will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features and aspects of the present application and together with the description, serve to explain the principles of the present application.
FIGS. 1a, 1b and 1c illustrate exemplary architectural diagrams of a part positioning system provided by embodiments of the present application;
FIG. 1d shows a schematic view of the CMS camera field of view;
FIG. 2 shows a flowchart of a part positioning method provided by an embodiment of the present application;
FIG. 3 illustrates an exemplary schematic diagram of a personalized model;
FIG. 4 shows a schematic diagram of the coordinate systems involved in an embodiment of the present application;
FIG. 5 shows an exemplary schematic diagram of the coordinate system of a second image acquisition device in an embodiment of the present application;
FIG. 6 shows a schematic structural diagram of a part positioning apparatus according to an embodiment of the present application;
FIG. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application.
Various exemplary embodiments, features and aspects of the present application will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers indicate identical or functionally similar elements. Although various aspects of the embodiments are illustrated in the accompanying drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
In addition, numerous specific details are set forth in the following detailed description in order to provide a better understanding of the present application. It will be understood by those skilled in the art that the present application may be practiced without some of these specific details. In some instances, methods, means, elements, and circuits have not been described in detail as not to unnecessarily obscure the present application.
To solve the above technical problem, the application provides a part positioning method that can accurately locate target parts such as the eyes or ears of a human head without a depth measurement device being configured. The method can be applied to terminal devices such as vehicles and driving-simulation cabins, so that when a user drives or simulates driving, target parts such as the user's eyes or ears are accurately located, enabling applications such as gaze tracking, augmented reality head-up display (AR-HUD) anti-dizziness, human-machine interaction, and eye-box adjustment.
Figs. 1a, 1b and 1c show exemplary architectural diagrams of a part positioning system provided in an embodiment of the present application. Referring to figs. 1a, 1b and 1c, a scenario in which the part positioning system is applied to a vehicle is described as an example. The part positioning system shown in fig. 1a is applied to a vehicle 10 and may include a first image acquisition device 11, a second image acquisition device 12, and a processor 13.
The first image acquisition device 11 and the second image acquisition device 12 may be used to acquire a first image and a second image of a user. The first image and the second image may be directly captured images or image frames from a video stream. In the embodiments of the present application, the first image acquisition device 11 and the second image acquisition device 12 may be devices capable of taking photographs or recording video, such as cameras or video recorders, and may capture RGB images. They may be mounted on the vehicle 10 as needed. Specifically, the first image acquisition device 11 and the second image acquisition device 12 are installed with parallax relative to the user, so that the acquired first image and second image have different viewing angles relative to the user, providing a basis for the subsequent optimization of the three-dimensional model.
In the embodiments of the present application, the installation positions of the first image acquisition device 11 and the second image acquisition device 12 may be as shown in figs. 1b and 1c. In one example, as shown in fig. 1b, the first image acquisition device 11 is arranged on the left A-pillar of the vehicle cabin and the second image acquisition device 12 is arranged on the right A-pillar of the vehicle cabin. In another example, as shown in fig. 1c, the first image acquisition device 11 is arranged on the left A-pillar of the vehicle cabin and the second image acquisition device 12 on the side of the cabin rearview mirror facing the user. In other examples, the first image acquisition device 11 and the second image acquisition device 12 may be mounted on the steering wheel, in the center console area, above a display screen, or the like. The first image acquisition device 11 and the second image acquisition device 12 are mainly used for capturing head images of a user in the vehicle cabin; that is, the first image and the second image include the user's head image.
Two RGB cameras are typically deployed in the cabin of an actual intelligent vehicle: a driver monitoring system (Driver Monitoring System, DMS) camera and a cabin monitoring system (Cabin Monitoring System, CMS) camera. The DMS camera is usually located on the left A-pillar or steering column near the driving position, with a resolution higher than 640 x 480 pixels, a horizontal field of view greater than 40 degrees (°), and a vertical field of view greater than 20°. Fig. 1d shows a schematic view of the CMS camera field of view: the horizontal field of view of the CMS is denoted α and the vertical field of view β. The CMS camera is typically located at the in-cabin rearview mirror position, with a resolution higher than 1920 x 1080 pixels, a horizontal field of view greater than 100°, and a vertical field of view greater than 60°. The field-of-view ranges of the DMS camera are analogous to those of the CMS camera and are not repeated here. Because the included angle between the DMS camera and the CMS camera is generally greater than 30° and less than 120°, the two can capture head images of a user in the driving position well. Therefore, in general, the DMS camera and the CMS camera may serve as the first image acquisition device 11 and the second image acquisition device 12 in the embodiments of the present application and perform the acquisition of the first image and the second image, respectively.
After capturing the first image and the second image of the user, the first image acquisition device 11 and the second image acquisition device 12 may send them to the processor 13 in a wired or wireless manner (for example, via Wi-Fi or Bluetooth).
The processor 13 may optimize the user's model to be optimized based on the first image and the second image of the user, thereby obtaining a personalized model of the user. The processor may then locate the user's target part based on the personalized model. The optimization process of the model to be optimized and the positioning process of the target part are described in detail later and are not repeated here.
The processor 13 may be an electronic device; specifically, it may be the processor of an in-vehicle processing device such as a head unit or an on-board computer, a conventional chip processor such as a central processing unit (Central Processing Unit, CPU) or a microcontroller (Micro Control Unit, MCU), or terminal hardware such as a mobile phone or a tablet.
Through the above part positioning system, the model can be optimized based on the user's first image and second image to accurately locate the user's target part, without adding an extra depth measurement device to the cabin and without imposing extra requirements on the price, size, or power consumption of the image acquisition devices.
Based on the application scenarios shown in figs. 1a to 1c, fig. 2 shows a flowchart of a part positioning method provided in an embodiment of the present application. The method may be performed by a vehicle, an on-board device, or a processor; it is described below taking execution by a processor as an example. As shown in fig. 2, the method includes:
step S11, under the condition of detecting a first user, acquiring a first image and a second image of the first user through a first image acquisition device and a second image acquisition device respectively.
The first user represents a user who currently needs model optimization. In one example, any user sitting in the cockpit may be considered a user who currently needs model optimization (that is, a first user); in this case model optimization is performed for every user entering the cockpit. In another example, when a user sits in the cockpit, the system may search for a personalized model of that user; if none exists, the user may be considered a first user, and if one exists, the user is considered not to need model optimization. In this case model optimization is performed only for new users. In a further example, when a user sits in the cockpit, the system may search for a personalized model of that user; if none exists, the user may be considered a first user; if one exists but its storage time is greater than a certain threshold (which may be set as desired, for example 30 days or 1 year), the user may also be considered a first user. In this case model optimization is performed not only for new users but also for some returning users.
In one possible implementation, the personalized model of each user may be stored locally on the vehicle; in this case, when a user sits in the cockpit, the vehicle may search locally for the user's personalized model.
In another possible implementation, the personalized model of each user may be stored in the cloud; in this case, when a user sits in the cockpit, the cloud may be searched for the user's personalized model.
In a further possible implementation, the personalized model of each user may be stored both locally on the vehicle and in the cloud. In this case, when a user sits in the cockpit, the local store may be searched first and, if no model is found, the cloud is searched next. Of course, the cloud may also be searched first and then the local store, or both may be searched simultaneously. If corresponding personalized models are found both locally and in the cloud, one of them may be selected according to, for example, storage time or generation time.
The first image may represent an image acquired by the first image acquisition means. The second image may represent an image acquired by the second image acquisition means. The first image and the second image need to include a head image of the user. Specifically, the first image and the second image of the first user need to include a head image of the first user.
Keeping the difference between the acquisition times of the first image and the second image less than or equal to a first threshold reduces model inaccuracy caused by movement of the user's head. The first threshold may be set as needed; for example, it may be 0 seconds, 10 milliseconds, or 0.5 seconds. It can be appreciated that the closer the acquisition times of the first image and the second image, the better the effect of the subsequent model optimization.
The first image and the second image differ in view angle relative to the first user. In one example, an angle between the first image capture device and the second image capture device is greater than a second threshold. In this way, it is ensured that the first image and the second image differ in view angle with respect to the first user. The second threshold may be set as required, for example, the second threshold may be 30 °, 50 °, 100 °, or the like. The first image capturing device and the second image capturing device may refer to fig. 1a to 1c, and are not described herein.
It can be understood that when the quality of the images acquired by the first and second image acquisition devices is poor, for example when an image is too blurred for subsequent processing because the head of the driver or passenger moved or the light was insufficient, the poor-quality images may be filtered out and only images of good quality retained as the first image and the second image.
And step S12, optimizing the model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user.
The model to be optimized represents the head model that needs to be optimized. The model to be optimized of the first user may be a general model of all users (also called an average model) or an existing personalized model of the first user. During optimization, the size of the model to be optimized may change, and parts such as its eyes and ears may change, so that the model comes ever closer to the actual head of the first user, finally yielding the personalized model of the first user.
In one possible implementation, the method further includes: under the condition that a user is detected, acquiring identity information of the user; searching a personalized model corresponding to the identity information; and under the condition that the personalized model corresponding to the identity information is not found, determining the detected user as the first user, and determining the universal model of each user as the model to be optimized.
The identity information may be used to uniquely identify a user. In one example, the identity information includes, but is not limited to, face information, fingerprint information, a user name, or an identification card number. When the identity information is face information (specifically, a face image or face image features), the process can be imperceptible to the user, which improves user experience. Taking a vehicle as an example, whether a driver (that is, a user) is present in the cabin may be detected by a seat pressure sensor, the CMS camera, the DMS camera, or the like; if a driver is detected, the driver's face information may be acquired by the CMS camera or the DMS camera.
After the identity information of the user is acquired, a personalized model corresponding to the identity information is first searched for locally and/or in the cloud. If no personalized model corresponding to the identity information is found, the detected user is a new user, and model optimization must be performed for the user before the user's target part is located; this new user is the first user. Since the first user is a new user with no previous personalized model, the general model of all users can be determined as the first user's model to be optimized.
In one possible implementation, the method further includes: under the condition that a personalized model corresponding to the identity information is found, determining the detected user as a second user; and positioning a target part in the second user based on the personalized model corresponding to the identity information.
The second user represents a user who does not currently need model optimization. If a personalized model corresponding to the identity information is found, the detected user is a returning user, and the user's existing personalized model (that is, the personalized model corresponding to the identity information) can be used for positioning. In this way, the model optimization process is omitted before the target part is located, which reduces waiting time, improves efficiency, and improves user experience; meanwhile, reusing the model saves computing resources and reduces energy consumption. The method for locating the target part of the second user may refer to the method for locating the target part of the first user and is not described in detail in the embodiments of the present application.
It is contemplated that the existing personalized models of some returning users may still deviate considerably from their actual heads. In the embodiments of the application, returning users can be divided into two types: for one type, the existing personalized model must first be optimized and then used for positioning, to improve accuracy; for the other type, the existing personalized model can be used directly for positioning, saving time and improving efficiency.
In one example, when a personalized model corresponding to the identity information is found and a model optimization instruction is received, determining the detected user as the first user, and determining the personalized model corresponding to the identity information as the model to be optimized; under the condition that a personalized model corresponding to the identity information is found and a model optimization instruction is not received, determining the detected user as a second user; and positioning a target part in the second user based on the personalized model corresponding to the identity information.
The model optimization instruction may be used to instruct model optimization. In one example, the model optimization instruction may be generated by default whenever a user is detected; in that case model optimization is performed first regardless of whether the detected user is new or returning, improving the accuracy of subsequent positioning. In another example, the model optimization instruction may be generated when the storage time of the personalized model corresponding to the identity information is greater than a third threshold. The third threshold may be set as needed, for example 30 days, 6 months, or 1 year. In this way, when a personalized model has existed for a long time, model optimization is performed before positioning, balancing efficiency and accuracy.
If a personalized model corresponding to the identity information is found, the detected user is a returning user. If a model optimization instruction is received, the user's existing personalized model may deviate considerably from the user's actual head; to improve accuracy, model optimization must be performed before the user's target part is located, and this returning user is the first user. Since the first user is a returning user with an existing personalized model, that existing personalized model (that is, the personalized model corresponding to the identity information) can be determined as the first user's model to be optimized. Compared with optimizing the general model of all users, optimizing the first user's existing personalized model can reduce the number of iterations, reduce the demand for computing resources and energy consumption, and improve the optimization effect.
If a personalized model corresponding to the identity information is found but no model optimization instruction is received, the detected user's existing personalized model is usable and no model optimization is needed; the detected user is then the second user. In this way, the model optimization process can be omitted before positioning, which reduces waiting time, saves computing resources, improves positioning efficiency, and improves user experience.
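The branching described above can be summarized in a short sketch (the store object, its lookup API, and the source of the optimization instruction are illustrative assumptions):

```python
def select_model(identity, store, general_model, optimize_instruction_received):
    """Classify the detected user and pick the model to use.

    Returns ("first_user", model_to_optimize) when optimization is needed,
    or ("second_user", personalized_model) when positioning can start directly.
    """
    personalized = store.find(identity)     # local and/or cloud lookup
    if personalized is None:
        # New user: optimize the general (average) model of all users.
        return "first_user", general_model
    if optimize_instruction_received:
        # Returning user whose model may have drifted: refine the existing model.
        return "first_user", personalized
    # Returning user with a usable model: skip optimization and locate directly.
    return "second_user", personalized
```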
Since the first image and the second image of the first user are head images of the first user acquired from two viewpoints at close acquisition times, a point M in the first image and a point N in the second image correspond to the same point on the first user's actual head. Therefore, in the embodiments of the application, the first user's model to be optimized can be optimized based on the first image and the second image so that it fits the first user's actual head ever more closely; once it fits the actual head to a sufficient degree, it can be used as the personalized model for locating the target part of the first user. The optimization process is described in detail later and is not repeated here.
FIG. 3 illustrates an exemplary schematic diagram of a personalized model. As shown in fig. 3, the first image and the second image are images of the first user's head from two viewing angles at the same moment, and the personalized model of the first user is obtained after the first user's model to be optimized is optimized based on the first image and the second image. The differences between the personalized model of the first user and the model to be optimized include, but are not limited to: head size, eye-socket depth, eye size, interocular distance, nose bridge height, nostril size, nose wing width, mouth size, mouth corner, ear size, and ear position.
In one possible implementation manner, after the personalized model of the first user is obtained, the method further includes: storing the identity information and the personalized model of the first user in association, locally and/or in the cloud.
In this way, when the same user is encountered next time, the optimization process can be omitted, or the number of model adjustments during optimization can be reduced, which effectively saves resources and time, improves efficiency, and improves user experience.
And step S13, positioning a target part in the first user based on the personalized model of the first user.
The target part includes, but is not limited to, the eyes, ears, nose, or mouth. Locating the target part of the first user means determining the position of the first user's target part in a camera coordinate system. The positioning process is described in detail later and is not repeated here.
In the embodiments of the application, the personalized head model of the first user is obtained from images with different viewing angles and close acquisition times, so that the target part of the first user can be located without configuring a depth measurement device, which saves cost and reduces energy consumption.
In one possible implementation, multiple image acquisition devices may be deployed and combined pairwise. For each combination, the two image acquisition devices of the combination are used to acquire a first image and a second image respectively for model optimization.
In one possible implementation, multiple groups of first and second images may be acquired. These groups may come from the same pair of image acquisition devices at different acquisition times, from different pairs of image acquisition devices at the same time, or from different pairs of image acquisition devices at different times.
For example, assume that there are an image pickup device 1, an image pickup device 2, and an image pickup device 3. Image acquisition device 1 acquires image 11 at time 1, acquires image 12 at time 2, image acquisition device 2 acquires image 21 at time 1, acquires image 22 at time 2, and image acquisition device 3 acquires image 31 at time 1 and acquires image 32 at time 2. In one example, image 11 and image 21 are as a group and image 12 and image 22 are as a group. In one example, images 11 and 21 can be grouped together, images 11 and 31 can be grouped together, and images 21 and 31 can be grouped together. In one example, image 11 and image 21 are as a group and image 22 and image 32 are as a group.
In the embodiment of the application, model optimization is performed based on a plurality of groups of first images and second images, so that the fitting degree of the personalized model and the real human head can be effectively improved, and the positioning accuracy is effectively improved.
The detailed process of optimizing the model to be optimized of the first user based on the first image and the second image of the first user to obtain the personalized model of the first user is described below.
For ease of understanding, the coordinate system and the conversion relationship between coordinate systems referred to in the embodiments of the present application will be described first.
The embodiments of the application involve three coordinate systems: the world coordinate system, the camera coordinate system, and the image coordinate system. Fig. 4 shows a schematic diagram of the coordinate systems involved in an embodiment of the present application. As shown in fig. 4, the world coordinate system is denoted O_w-UVW, the camera coordinate system O_c-XYZ, and the image coordinate system o-xy. The unit of the world coordinate system is meters (m), the unit of the camera coordinate system is meters (m), and the unit of the image coordinate system is pixels.
As shown in fig. 4, a point P in the world coordinate system is a real point in the real world, with coordinates (U, V, W) in the world coordinate system; the imaging point of P in the image coordinate system is p, with coordinates (x, y) in the image coordinate system. The focal length of the camera of the image acquisition device is f, which equals the distance between o and O_c, i.e., f = ‖o − O_c‖.
The camera coordinate system can be obtained from the world coordinate system through translation and rotation; this is a rigid transformation, that is, the object is not deformed. The conversion relationship between the world coordinate system and the camera coordinate system is shown in formula I.
Here [R, T] is the extrinsic matrix of the image acquisition device, consisting of a rotation matrix R and a translation vector T. The coordinates of point P in the world coordinate system are (U, V, W), its coordinates in the camera coordinate system (namely the coordinate system of the image acquisition device) are (X, Y, Z), and the conversion relationship between the two is shown in Formula I.
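The equation referenced as Formula I did not survive text extraction; reconstructed in the standard rigid-body form consistent with the [R, T] notation above (exact layout assumed), it reads:

$$
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = R \begin{bmatrix} U \\ V \\ W \end{bmatrix} + T,
\qquad\text{equivalently}\qquad
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ \mathbf{0}^{\top} & 1 \end{bmatrix} \begin{bmatrix} U \\ V \\ W \\ 1 \end{bmatrix}.
$$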
The conversion from the camera coordinate system to the image coordinate system goes from three dimensions to two dimensions and is a perspective projection. The conversion relation between the camera coordinate system and the image coordinate system is shown in Formula II.
Here (f_x, f_y, c_x, c_y) are the camera intrinsic parameters of the image acquisition device; f_x, f_y, c_x and c_y are in pixels. The coordinates of point P in the camera coordinate system (namely the coordinate system of the image acquisition device) are (X, Y, Z), the imaging point of P in the image coordinate system is p with coordinates (x, y), and the conversion relationship between the two is shown in Formula II.
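Formula II is likewise missing from the extracted text; the standard pinhole projection it refers to (notation assumed) is:

$$
Z \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} =
\begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \end{bmatrix},
\qquad\text{i.e.}\qquad
x = f_x \frac{X}{Z} + c_x, \quad y = f_y \frac{Y}{Z} + c_y.
$$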
The conversion relation between the image coordinate system and the world coordinate system can be obtained by combining Formula I and Formula II, as shown in Formula III.
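Written out under the same assumed notation, with K denoting the intrinsic matrix, the combined relation presumably takes the standard form:

$$
Z \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K\,[R \mid T]\, \begin{bmatrix} U \\ V \\ W \\ 1 \end{bmatrix},
\qquad
K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}.
$$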
Based on Formula III, the camera intrinsic parameters of the image acquisition device can be solved through an N-point perspective problem solving algorithm (SolvePnP algorithm) by using a calibration board whose world-coordinate positions are known together with the positions of the board in the image coordinate system; reference may be made to the related art, and this is not described in detail in the embodiment of the application.
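As a minimal sketch of this calibration step, not the embodiment's code: the board geometry, file names, and the use of OpenCV (calibrateCamera for the intrinsics, solvePnP for a subsequent pose) are assumptions for illustration.

```python
# Sketch: estimate camera intrinsics from a checkerboard whose world
# coordinates are known, then recover a pose with solvePnP (Formula III).
import cv2
import numpy as np

pattern = (9, 6)   # inner corners per row/column (assumed board geometry)
square = 0.025     # square size in metres (assumed)

# World coordinates of the board corners (Z = 0 on the board plane).
obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in ["board_0.png", "board_1.png", "board_2.png"]:  # hypothetical files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(obj)
        img_pts.append(corners)

# Intrinsics (fx, fy, cx, cy inside K) and distortion from the board views.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)

# With K known, the pose of any new view follows from solvePnP.
ok, rvec, tvec = cv2.solvePnP(obj, img_pts[0], K, dist)
```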
The embodiment of the application relates to a first image acquisition device and a second image acquisition device; correspondingly, the camera intrinsic parameters of the first image acquisition device and of the second image acquisition device are solved based on Formula III.
For the same point in the world coordinate system at the same time, assume that its coordinates in the camera coordinate system of the first image acquisition device are (X_1, Y_1, Z_1) and its coordinates in the camera coordinate system of the second image acquisition device are (X_2, Y_2, Z_2). The relationship between the two coordinates is shown in Formula IV.
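Formula IV is also absent from the extracted text; in standard form, with R_12 and T_12 as assumed names for the relative rotation and translation, it presumably reads:

$$
\begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = R_{12} \begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} + T_{12}.
$$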
The matrix in Formula IV (written above as R_12 together with T_12) is the relative translation-rotation matrix of the first image acquisition device and the second image acquisition device, namely the relative extrinsic parameters between the first image acquisition device and the second image acquisition device.
When a series of coordinate sets of the calibration points in the two camera coordinate systems is available, the relative rotation matrix and translation vector can be estimated by the Procrustes algorithm and the iterative closest point (Iterative Closest Point, ICP) algorithm; reference may be made to the related art, and this is not described in detail in the embodiment of the application.
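A minimal sketch of the closed-form Procrustes (Kabsch) alignment follows, assuming matched 3D calibration points are already available in both camera frames; an ICP loop would alternate point re-matching with this step. The function name and interface are illustrative.

```python
# Sketch: given corresponding 3D points p1 (camera-1 frame) and p2 (camera-2
# frame), estimate R, T such that p2 ≈ R @ p1 + T (the relative extrinsics
# of Formula IV), via the Kabsch/Procrustes closed form.
import numpy as np

def relative_extrinsics(p1: np.ndarray, p2: np.ndarray):
    """p1, p2: (N, 3) arrays of corresponding points in the two camera frames."""
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)        # centroids
    H = (p1 - c1).T @ (p2 - c2)                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct a possible reflection so that R is a proper rotation (det = +1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    T = c2 - R @ c1
    return R, T
```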
Thus, the camera intrinsic parameters of the first image acquisition device, the camera intrinsic parameters of the second image acquisition device, and the relative extrinsic parameters between the first image acquisition device and the second image acquisition device are obtained.
In one possible implementation, step S12 may include: determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user; determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user; converting a first coordinate of the first key point in the coordinate system of the first image acquisition device to a second coordinate of the first key point in the coordinate system of the second image acquisition device based on a space conversion relation between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, so as to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device; and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
The first key points represent the key points of the first user in the model to be optimized. For example, the first key points may be the 68 key points used in face detection in the related art, which are mainly distributed over the facial contour and the facial features; the embodiment of the application does not limit the first key points.
According to the coordinates of the first key point in the world coordinate system, the coordinates of the first key point in the coordinate system of the first image of the first user (namely the image coordinate system of the first image of the first user), and Formula III, the N-point perspective problem is solved to obtain the camera intrinsic parameters of the first image acquisition device. According to the coordinates of the first key point in the coordinate system of the first image of the first user, the camera intrinsic parameters of the first image acquisition device, and Formula II, the first coordinate of the first key point in the coordinate system of the first image acquisition device (namely the camera coordinate system of the first image acquisition device) can be obtained.

Likewise, according to the coordinates of the first key point in the world coordinate system, the coordinates of the first key point in the coordinate system of the second image of the first user (namely the image coordinate system of the second image of the first user), and Formula III, the N-point perspective problem is solved to obtain the camera intrinsic parameters of the second image acquisition device. According to the coordinates of the first key point in the coordinate system of the second image of the first user, the camera intrinsic parameters of the second image acquisition device, and Formula II, the first coordinate of the first key point in the coordinate system of the second image acquisition device (namely the camera coordinate system of the second image acquisition device) can be obtained.
The spatial conversion relationship between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device may be determined as follows: according to the coordinates of a calibration point in the world coordinate system, expressed in the coordinate system of the first image acquisition device and in the coordinate system of the second image acquisition device, the spatial conversion relationship between the two coordinate systems is determined, namely the relative extrinsic parameters between the first image acquisition device and the second image acquisition device obtained based on Formula IV; this process is described above and is not repeated here. Based on this spatial conversion relationship, the first coordinate of the first key point in the coordinate system of the first image acquisition device can be converted into the second coordinate of the first key point in the coordinate system of the second image acquisition device.
Because the first coordinate of the first key point in the coordinate system of the second image acquisition device and the second coordinate of the first key point in the coordinate system of the second image acquisition device belong to the same coordinate system and correspond to the same actual point, the two coordinates should be coincident or as close as possible, and based on the two coordinates, the model to be optimized can be optimized to obtain the personalized model of the first user.
In a possible implementation manner, the optimizing the model to be optimized according to the first coordinate and the second coordinate of the first key point in the coordinate system of the second image acquisition device to obtain a personalized model of the first user includes: determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device; and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
Fig. 5 shows an exemplary schematic diagram of the coordinate system of the second image acquisition device in the embodiment of the present application. As shown in Fig. 5, in the left image, the dark points correspond to the first coordinates of the first key points in the coordinate system of the first image acquisition device, and the light points correspond to the first coordinates of the first key points in the coordinate system of the second image acquisition device. In the right image, the dark points correspond to the second coordinates of the first key points in the coordinate system of the second image acquisition device (obtained by spatially transforming the first coordinates in the coordinate system of the first image acquisition device), and the light points correspond to the first coordinates of the first key points in the coordinate system of the second image acquisition device. As the right image of Fig. 5 shows, the first coordinate and the second coordinate of a first key point in the coordinate system of the second image acquisition device may not coincide, which means that a certain gap exists between the current model to be optimized and the actual head. In this case, the loss function may be determined according to the difference between the first coordinate and the second coordinate of the same first key point in the coordinate system of the second image acquisition device, and the scale parameter and the shape parameters are then optimized until the loss function meets a preset threshold. The scale parameter is used to adjust the size of the model, and the shape parameters are used to adjust the size, position, height, etc. of the facial features. It will be appreciated that the dark and light points in the right image of Fig. 5 overlap more closely once the loss function meets the preset threshold. The preset threshold can be set as needed: the smaller the preset threshold, the better the fitting effect of the model but the larger the amount of computation; the larger the preset threshold, the worse the fitting effect but the smaller the amount of computation. The embodiment of the present application does not limit the value of the preset threshold.
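A minimal sketch of this optimization loop follows, assuming a morphable-model interface model(shape) that returns (N, 3) key points and reusing the hypothetical keypoints_in_camera_frame helper sketched earlier; all names (model, cam1, cam2, R12, T12) are illustrative, not the embodiment's API.

```python
# Sketch: the loss compares, in the second camera's frame, the transformed
# camera-1 coordinates ("second coordinates") against the directly computed
# camera-2 coordinates ("first coordinates") of the same key points.
import numpy as np
from scipy.optimize import minimize

def loss(params, kps1_2d, kps2_2d, cam1, cam2, R12, T12, model):
    scale, shape = params[0], params[1:]
    pts = scale * model(shape)                            # candidate key points (N, 3)
    P1 = keypoints_in_camera_frame(pts, kps1_2d, *cam1)   # first coords, camera 1
    P2 = keypoints_in_camera_frame(pts, kps2_2d, *cam2)   # first coords, camera 2
    P1_in_2 = (R12 @ P1.T + T12.reshape(3, 1)).T          # second coords, camera 2
    return float(np.mean(np.sum((P1_in_2 - P2) ** 2, axis=1)))

def fit_personalized_model(x0, data, threshold=1e-4):
    """data: tuple (kps1_2d, kps2_2d, cam1, cam2, R12, T12, model)."""
    res = minimize(loss, x0, args=data, method="Powell")  # derivative-free
    return res.x, res.fun <= threshold   # optimized params, threshold check
```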
The model optimization process has now been introduced, and the personalized model of the first user has been obtained. The process of locating the target part of the first user based on the personalized model of the first user is described below.
In one possible implementation, step S13 may include: acquiring a third image of the first user by a third image acquisition device; and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user, wherein the second key point is used for representing a point corresponding to the target part of the first user in the personalized model.
The third image represents an image acquired by the third image acquisition device and contains the target part of the user; for example, the third image of the first user contains the target part of the first user. For details of the third image acquisition device, reference may be made to the descriptions of the first image acquisition device and the second image acquisition device. The third image acquisition device may be one of the first image acquisition device and the second image acquisition device, or it may be any image acquisition device other than the first image acquisition device and the second image acquisition device.
The second key point represents the point in the personalized model corresponding to the target part of the first user; specifically, it may be a point corresponding to an eye of the first user in the personalized model, or a point corresponding to an ear of the first user in the personalized model. The coordinates of the second key point in the coordinate system of the third image acquisition device may be determined according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user; this process may refer to the process of determining the coordinates of the first key point in the coordinate system of the first image acquisition device, and is not described again here. In this way, the coordinates of the target part of the first user in the coordinate system of the third image acquisition device are obtained. In the case where the second key point corresponds to an eye of the first user, the coordinates of the eye of the first user in the coordinate system of the third image acquisition device are obtained; in the case where the second key point corresponds to an ear of the first user, the coordinates of the ear of the first user in the coordinate system of the third image acquisition device are obtained.
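A sketch of this single-camera localization step, again assuming OpenCV and a hypothetical index list for the second key points (e.g. eye or ear vertices of the personalized model):

```python
# Sketch: align the personalized model to the third image via solvePnP, then
# read out the second key points (target part) in the third camera's frame.
import cv2
import numpy as np

def locate_target(personal_pts, image_pts, K3, dist3, target_idx):
    """personal_pts: (N, 3) personalized-model key points; image_pts: (N, 2)
    detected key points in the third image; K3, dist3: third-camera intrinsics
    and distortion; target_idx: indices of the second key points (assumed)."""
    ok, rvec, tvec = cv2.solvePnP(
        personal_pts.astype(np.float32), image_pts.astype(np.float32), K3, dist3)
    R, _ = cv2.Rodrigues(rvec)
    cam_pts = (R @ personal_pts.T + tvec).T   # all model points, camera-3 frame
    return cam_pts[target_idx]                # 3D position of the target part
```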
In the embodiment of the application, after personalized head modeling is completed, only the personalized head model and an image shot by a single image acquisition device are needed; the three-dimensional position information of the target part can then be obtained using an N-point perspective problem solving algorithm, which effectively reduces the demand on computing resources and energy consumption while ensuring the positioning accuracy of the target part. In addition, dual-camera data does not need to be provided continuously, which further reduces the overall complexity and energy consumption.
In a possible implementation manner, after the target part in the second user is located based on the personalized model corresponding to the identity information, the method further includes: acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively; optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user; and updating the personalized model corresponding to the identity information stored locally and/or in the cloud to the optimized model of the second user.
After the target part in the second user is positioned, the current personalized model of the second user (namely the personalized model corresponding to the identity information) can be further optimized to improve its accuracy: on the one hand, performing the optimization after positioning saves time in the current positioning; on the other hand, it improves the accuracy of the next positioning.
The following describes the location positioning method provided in the present application through two application examples, Example 1 and Example 2. These examples serve only to describe the positioning process more clearly and do not limit the method provided in the embodiments of the present application.
Example 1
First, the camera intrinsic parameters of the DMS camera and the CMS camera in the vehicle, and the relative extrinsic parameters between the two, are calibrated. After user A enters the vehicle cabin and sits down, the vehicle shoots user A through the DMS camera and the CMS camera to obtain two RGB images with different viewing angles at similar times (i.e., the difference between the acquisition times is less than or equal to a first threshold; the times may, for example, be identical), namely a first image and a second image of user A. The vehicle then optimizes the universal model based on the first image and the second image of user A to obtain the personalized model of user A. Finally, the vehicle shoots user A through the DMS camera or the CMS camera to obtain a new RGB image (namely a third image of user A), and positions the target part of user A based on the third image of user A and the personalized model of user A.
Example 2
First, the camera intrinsic parameters of camera 1 and camera 2 in the acquisition bin, and the relative extrinsic parameters between camera 1 and camera 2, are calibrated. After user A enters a designated position in the acquisition bin and sits down, the acquisition bin shoots user A through camera 1 and camera 2 to obtain two RGB images with different viewing angles at similar times (namely a first image and a second image of user A). The acquisition bin then optimizes the universal model based on the first image and the second image of user A to obtain the personalized model of user A. Afterwards, the acquisition bin stores the identity information of user A and the personalized model of user A in association in the cloud.
After user A enters the vehicle cabin and sits down, the vehicle acquires the personalized model of user A from the cloud based on the identity information of user A. Then, the vehicle shoots user A through the DMS camera or the CMS camera to obtain a new RGB image (namely a third image of user A), and positions the target part of user A based on the third image of user A and the personalized model of user A.
Fig. 6 shows a schematic structural diagram of a location positioning device according to an embodiment of the present application. As shown in fig. 6, the site locating apparatus 60 may include:
the first acquisition module 61 is configured to acquire, when a first user is detected, a first image and a second image of the first user through a first image acquisition device and a second image acquisition device, where the first user represents a user who needs to perform model optimization currently, the first image and the second image have different viewing angles relative to the first user, and a difference between acquisition moments of the first image and the second image of the first user is less than or equal to a first threshold;
the first optimization module 62 is configured to optimize a model to be optimized of the first user based on the first image and the second image of the first user, so as to obtain a personalized model of the first user;
The first positioning module 63 is configured to position a target location in the first user based on the personalized model of the first user.
In the embodiment of the application, the personalized head model of the first user is obtained based on images with different viewing angles and similar acquisition times, so that the target part of the first user can be positioned without configuring a depth-measuring device, which saves cost and reduces energy consumption.
In one possible implementation, the apparatus further includes:
the second acquisition module is used for acquiring the identity information of the user under the condition that the user is detected, wherein the identity information is used for identifying the unique user;
the searching module is used for searching the personalized model corresponding to the identity information;
the first determining module is used for determining that the detected user is the first user under the condition that the personalized model corresponding to the identity information is not found, and determining a universal model of each user as the model to be optimized.
in one possible implementation, the apparatus further includes:
the second determining module is used for determining that the detected user is a second user under the condition that the personalized model corresponding to the identity information is found, and the second user represents the user who does not need model optimization currently;
And the second positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
In one possible implementation, the apparatus further includes:
the third determining module is used for determining that the detected user is the first user and determining the personalized model corresponding to the identity information as the model to be optimized under the condition that the personalized model corresponding to the identity information is found and a model optimization instruction is received;
a fourth determining module, configured to determine, when a personalized model corresponding to the identity information is found and a model optimization instruction is not received, that the detected user is a second user, where the second user represents a user that does not currently need model optimization;
and the third positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
In a possible implementation manner, after the target location in the second user is located based on the personalized model corresponding to the identity information, the apparatus further includes:
The third acquisition module is used for acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively;
the second optimization module is used for optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user;
and the updating module is used for updating the stored personalized model corresponding to the identity information into the optimized model of the second user.
In one possible implementation, the apparatus further includes:
and the storage module is used for storing the identity information and the personalized model of the first user in a correlated mode.
In one possible implementation manner, the first optimization module is further configured to:
determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user, wherein the first key point is used for representing the key point of the first user in the model to be optimized;
Determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user;
converting a first coordinate of the first key point in the coordinate system of the first image acquisition device to a second coordinate of the first key point in the coordinate system of the second image acquisition device based on a space conversion relation between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, so as to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device;
and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
In a possible implementation manner, according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device, the model to be optimized is optimized, so as to obtain a personalized model of the first user, which includes:
Determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device;
and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
In one possible implementation, the apparatus further includes:
and a fifth determining module, configured to determine a spatial conversion relationship between the coordinate system of the first image capturing device and the coordinate system of the second image capturing device according to the coordinates of the calibration point in the world coordinate system in the coordinate system of the first image capturing device and the coordinates in the coordinate system of the second image capturing device.
In one possible implementation, the first positioning module is further configured to:
acquiring a third image of the first user by a third image acquisition device;
and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user, wherein the second key point is used for representing a point corresponding to the target part of the first user in the personalized model.
Fig. 7 shows a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in fig. 7, the electronic device may include at least one processor 13, a memory 302, an input output device 303, and a bus 304. The following describes the respective constituent elements of the electronic device in detail with reference to fig. 7:
The processor 13 is the control center of the electronic device, and may be a single processor or a collective term for a plurality of processing elements. For example, the processor 13 is a CPU, but it may also be an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits configured to implement embodiments of the present disclosure, for example: one or more digital signal processors (Digital Signal Processor, DSP), or one or more field programmable gate arrays (Field Programmable Gate Array, FPGA).
The processor 13 may, among other things, perform various functions of the electronic device by running or executing software programs stored in the memory 302 and invoking data stored in the memory 302.
In a specific implementation, the processor 13 may include one or more CPUs, such as CPU0 and CPU1 shown in the figures, as an example.
In a particular implementation, as one embodiment, the electronic device may include multiple processors (not shown). Each of these processors may be a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
Memory 302 may be, but is not limited to, a read-only memory (Read-Only Memory, ROM) or other type of static storage device that can store static information and instructions, a random access memory (Random Access Memory, RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), a compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory 302 may be standalone and coupled to the processor 13 via the bus 304, or may be integrated with the processor 13.
The input output device 303 is used for communicating with other devices or communication networks, for example an Ethernet, a radio access network (Radio Access Network, RAN), or a wireless local area network (Wireless Local Area Network, WLAN). The input output device 303 may include all or part of a baseband processor, and may optionally include a radio frequency (Radio Frequency, RF) processor. The RF processor is used for receiving and transmitting RF signals; the baseband processor is used for processing baseband signals converted from RF signals, or baseband signals to be converted into RF signals.
In a specific implementation, as an embodiment, the input output device 303 may include a transmitter and a receiver. Wherein the transmitter is used for transmitting signals to other devices or communication networks, and the receiver is used for receiving signals transmitted by other devices or communication networks. The transmitter and receiver may be independent or may be integrated.
Bus 304 may be an industry standard architecture (Industry Standard Architecture, ISA) bus, a peripheral component interconnect (Peripheral Component Interconnect, PCI) bus, an extended industry standard architecture (Extended Industry Standard Architecture, EISA) bus, or the like. The bus may be classified into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in Fig. 7, but this does not mean that there is only one bus or only one type of bus.
The device structure shown in Fig. 7 does not constitute a limitation on the electronic device; the device may include more or fewer components than shown, combine certain components, or arrange the components differently.
The embodiment of the application also provides a model training system. The system comprises a first image acquisition device, a second image acquisition device and a processor.
The first image acquisition device is used for acquiring a first image of a first user, where the first user represents a user currently needing model optimization; the second image acquisition device is used for acquiring a second image of the first user, where the first image and the second image have different viewing angles relative to the first user, and the difference between the acquisition times of the first image and the second image is within a first threshold; the processor is used for optimizing the model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user.
The embodiment of the application also provides a vehicle with the system shown in Fig. 1a. The vehicle may be a household sedan, a truck, or the like, and may also be a special vehicle such as an ambulance, a fire truck, a police car, or an engineering emergency vehicle. The modules of the system may be installed in the vehicle in a pre-installed or retrofitted manner, and may exchange data relying on the vehicle's bus or interface circuits; alternatively, with the development of wireless technology, the modules may also exchange data wirelessly so as to eliminate the inconvenience caused by wiring.
Embodiments of the present application provide a location positioning device, comprising: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to implement the above-described method when executing the instructions.
Embodiments of the present application provide a non-transitory computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
Embodiments of the present application provide a computer program product comprising a computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in a processor of an electronic device, performs the above method.
The computer readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer disk, a hard disk, a random access memory (Random Access Memory, RAM), a read-only memory (Read-Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM, or flash memory), a static random-access memory (Static Random-Access Memory, SRAM), a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), a digital versatile disc (Digital Video Disc, DVD), a memory stick, a floppy disk, a mechanical coding device such as a punch card or an in-groove bump structure having instructions stored thereon, and any suitable combination of the foregoing.
The computer readable program instructions or code described herein may be downloaded from a computer readable storage medium to a respective computing/processing device or to an external computer or external storage device over a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmissions, wireless transmissions, routers, firewalls, switches, gateway computers and/or edge servers. The network interface card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present application may be assembly instructions, instruction set architecture (Instruction Set Architecture, ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The computer readable program instructions may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (Local Area Network, LAN) or a wide area network (Wide Area Network, WAN), or it may be connected to an external computer (for example, through the internet using an internet service provider). In some embodiments, aspects of the present application are implemented by personalizing electronic circuitry, such as programmable logic circuitry, field programmable gate arrays (Field-Programmable Gate Array, FPGA), or programmable logic arrays (Programmable Logic Array, PLA), with state information of computer readable program instructions.
Various aspects of the present application are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable medium having the instructions stored therein includes an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by hardware (e.g., circuits or application-specific integrated circuits (Application Specific Integrated Circuit, ASIC)) that performs the corresponding functions or acts, or by combinations of hardware and software, such as firmware.
Although the present application has been described herein in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the figures, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The embodiments of the present application have been described above, the foregoing description is exemplary, not exhaustive, and not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope of the various embodiments described. The terminology used herein was chosen in order to best explain the principles of the embodiments, the practical application, or the improvement of technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think about changes or substitutions within the technical scope of the present application, and the changes and substitutions are intended to be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (23)
- A method of locating a site, the method comprising:
under the condition of detecting a first user, respectively acquiring a first image and a second image of the first user through a first image acquisition device and a second image acquisition device, wherein the first user represents a user needing model optimization at present, the first image and the second image are different in view angle relative to the first user, and the difference value between the acquisition moments of the first image and the second image of the first user is smaller than or equal to a first threshold value;
optimizing a model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user;
and positioning a target part in the first user based on the personalized model of the first user.
- The method according to claim 1, wherein the method further comprises:
under the condition that a user is detected, acquiring identity information of the user, wherein the identity information is used for identifying a unique user;
searching a personalized model corresponding to the identity information;
and under the condition that the personalized model corresponding to the identity information is not found, determining the detected user as the first user, and determining the universal model of each user as the model to be optimized.
- The method according to claim 2, wherein the method further comprises:
under the condition that a personalized model corresponding to the identity information is found, determining that the detected user is a second user, wherein the second user represents a user which does not need model optimization currently;
and positioning a target part in the second user based on the personalized model corresponding to the identity information.
- The method according to claim 2, wherein the method further comprises:
under the condition that a personalized model corresponding to the identity information is found and a model optimization instruction is received, determining the detected user as the first user, and determining the personalized model corresponding to the identity information as the model to be optimized;
under the condition that a personalized model corresponding to the identity information is found and a model optimization instruction is not received, determining that the detected user is a second user, wherein the second user represents a user which does not need model optimization currently;
and positioning a target part in the second user based on the personalized model corresponding to the identity information.
- The method according to claim 3 or 4, wherein after locating the target site in the second user based on the personalized model corresponding to the identity information, the method further comprises:
acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively;
optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user;
updating the stored personalized model corresponding to the identity information into an optimized model of the second user.
- The method according to any one of claims 2 to 5, wherein after said deriving a personalized model of said first user, the method further comprises:
and storing the identity information and the personalized model of the first user in a correlated way.
- The method according to any one of claims 1 to 6, wherein optimizing the model to be optimized for the first user based on the first image and the second image of the first user, to obtain a personalized model for the first user, comprises:
determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user, wherein the first key point is used for representing the key point of the first user in the model to be optimized;
determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user;
converting a first coordinate of the first key point in the coordinate system of the first image acquisition device to a second coordinate of the first key point in the coordinate system of the second image acquisition device based on a space conversion relation between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, so as to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device;
and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
- The method of claim 7, wherein optimizing the model to be optimized according to the first coordinate and the second coordinate of the first key point in the coordinate system of the second image acquisition device to obtain the personalized model of the first user comprises:
determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device;
and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
- The method according to claim 7 or 8, characterized in that the method further comprises:
and determining the space conversion relation between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device according to the coordinates of the calibration point in the world coordinate system in the coordinate system of the first image acquisition device and the coordinates in the coordinate system of the second image acquisition device.
- The method according to any one of claims 6 to 9, wherein locating a target site in the first user based on the personalized model of the first user comprises:
acquiring a third image of the first user by a third image acquisition device;
and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user, wherein the second key point is used for representing a point corresponding to the target part of the first user in the personalized model.
- A site locating apparatus, the apparatus comprising:
the first acquisition module is used for respectively acquiring a first image and a second image of a first user through the first image acquisition device and the second image acquisition device under the condition of detecting the first user, wherein the first user represents a user needing model optimization at present, the first image and the second image are different in view angle relative to the first user, and the difference value between the acquisition moments of the first image of the first user and the second image of the first user is smaller than or equal to a first threshold value;
the first optimization module is used for optimizing the model to be optimized of the first user based on the first image and the second image of the first user to obtain a personalized model of the first user;
and the first positioning module is used for positioning the target part in the first user based on the personalized model of the first user.
- The apparatus of claim 11, wherein the apparatus further comprises:
the second acquisition module is used for acquiring the identity information of the user under the condition that the user is detected, wherein the identity information is used for identifying the unique user;
the searching module is used for searching the personalized model corresponding to the identity information;
the first determining module is used for determining that the detected user is the first user under the condition that the personalized model corresponding to the identity information is not found, and determining the universal model of each user as the model to be optimized.
- The apparatus of claim 12, wherein the apparatus further comprises:
the second determining module is used for determining that the detected user is a second user under the condition that the personalized model corresponding to the identity information is found, and the second user represents the user who does not need model optimization currently;
and the second positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
- The apparatus of claim 12, wherein the apparatus further comprises:
the third determining module is used for determining that the detected user is the first user and determining the personalized model corresponding to the identity information as the model to be optimized under the condition that the personalized model corresponding to the identity information is found and a model optimization instruction is received;
a fourth determining module, configured to determine, when a personalized model corresponding to the identity information is found and a model optimization instruction is not received, that the detected user is a second user, where the second user represents a user that does not currently need model optimization;
and the third positioning module is used for positioning the target part in the second user based on the personalized model corresponding to the identity information.
- The apparatus according to claim 13 or 14, wherein after locating a target site in the second user based on the personalized model corresponding to the identity information, the apparatus further comprises:
the third acquisition module is used for acquiring a first image and a second image of the second user through the first image acquisition device and the second image acquisition device respectively;
the second optimization module is used for optimizing the personalized model corresponding to the identity information based on the first image and the second image of the second user to obtain an optimized model of the second user;
and the updating module is used for updating the stored personalized model corresponding to the identity information into the optimized model of the second user.
- The apparatus according to any one of claims 12 to 15, further comprising:
and the storage module is used for storing the identity information and the personalized model of the first user in a correlated mode.
- The apparatus of any one of claims 11 to 16, wherein the first optimization module is further configured to:
determining a first coordinate of a first key point in a coordinate system of the first image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a first image of the first user, wherein the first key point is used for representing the key point of the first user in the model to be optimized;
determining a first coordinate of the first key point in a coordinate system of the second image acquisition device according to the coordinate of the first key point in a world coordinate system and the coordinate of the first key point in a coordinate system of a second image of the first user;
converting a first coordinate of the first key point in the coordinate system of the first image acquisition device to a second coordinate of the first key point in the coordinate system of the second image acquisition device based on a space conversion relation between the coordinate system of the first image acquisition device and the coordinate system of the second image acquisition device, so as to obtain a second coordinate of the first key point in the coordinate system of the second image acquisition device;
and optimizing the model to be optimized according to a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device to obtain a personalized model of the first user.
- The apparatus of claim 17, wherein optimizing the model to be optimized based on the first coordinate and the second coordinate of the first key point in the coordinate system of the second image acquisition device to obtain the personalized model of the first user comprises:
determining a loss function according to the difference between a first coordinate and a second coordinate of the first key point in a coordinate system of the second image acquisition device;
and optimizing the scale parameters and the shape parameters of the model to be optimized so that the loss function meets a preset threshold.
- The apparatus according to claim 17 or 18, characterized in that the apparatus further comprises:
and a fifth determining module, configured to determine a spatial conversion relationship between the coordinate system of the first image capturing device and the coordinate system of the second image capturing device according to the coordinates of the calibration point in the world coordinate system in the coordinate system of the first image capturing device and the coordinates in the coordinate system of the second image capturing device.
- The apparatus of any one of claims 16 to 19, wherein the first positioning module is further configured to:
acquiring a third image of the first user by a third image acquisition device;
and determining the coordinates of a second key point in the coordinate system of the third image acquisition device according to the coordinates of the second key point in the world coordinate system and the coordinates of the second key point in the coordinate system of the third image of the first user, wherein the second key point is used for representing a point corresponding to the target part of the first user in the personalized model.
- An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the method of any one of claims 1 to 10 when executing the instructions.
- A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the method of any of claims 1 to 10.
- A computer program product comprising computer readable code, or a non-transitory computer readable storage medium carrying computer readable code, which when run in an electronic device, a processor in the electronic device performs the method of any one of claims 1 to 10.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2022/093271 WO2023220916A1 (en) | 2022-05-17 | 2022-05-17 | Part positioning method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117441190A (en) | 2024-01-23
Family
ID=88834365
Family Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202280005358.6A Pending CN117441190A (en) | 2022-05-17 | 2022-05-17 | Position positioning method and device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN117441190A (en) |
WO (1) | WO2023220916A1 (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109697389B (en) * | 2017-10-23 | 2021-10-01 | 北京京东尚科信息技术有限公司 | Identity recognition method and device |
CN112149461A (en) * | 2019-06-27 | 2020-12-29 | 中科智云科技有限公司 | Method and apparatus for identifying an object |
CN111814733A (en) * | 2020-07-23 | 2020-10-23 | 深圳壹账通智能科技有限公司 | Concentration degree detection method and device based on head posture |
WO2022165809A1 (en) * | 2021-02-07 | 2022-08-11 | 华为技术有限公司 | Method and apparatus for training deep learning model |
CN113936335B (en) * | 2021-10-11 | 2022-08-23 | 苏州爱果乐智能家居有限公司 | Intelligent sitting posture reminding method and device |
- 2022-05-17: CN application CN202280005358.6A filed, published as CN117441190A, status Pending
- 2022-05-17: PCT application PCT/CN2022/093271 filed, published as WO2023220916A1, status Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2023220916A1 (en) | 2023-11-23 |
Similar Documents
Publication | Title
---|---
US11042723B2 (en) | Systems and methods for depth map sampling
CN106650705B (en) | Region labeling method and device and electronic equipment
US20190266751A1 (en) | System and method for identifying a camera pose of a forward facing camera in a vehicle
JP2019024196A (en) | Camera parameter set calculating apparatus, camera parameter set calculating method, and program
CN109683699B (en) | Method and device for realizing augmented reality based on deep learning and mobile terminal
WO2022241638A1 (en) | Projection method and apparatus, and vehicle and AR-HUD
CN111625091B (en) | Label overlapping method and device based on AR glasses
CN111062873A (en) | Parallax image splicing and visualization method based on multiple pairs of binocular cameras
US11380111B2 (en) | Image colorization for vehicular camera images
US10013761B2 (en) | Automatic orientation estimation of camera system relative to vehicle
CN111854620B (en) | Monocular camera-based actual pupil distance measuring method, device and equipment
WO2023272453A1 (en) | Gaze calibration method and apparatus, device, computer-readable storage medium, system, and vehicle
WO2022257120A1 (en) | Pupil position determination method, device and system
CN111288956B (en) | Target attitude determination method, device, equipment and storage medium
CN114913290A (en) | Multi-view-angle fusion scene reconstruction method, perception network training method and device
CN111405263A (en) | Method and system for enhancing head-up display by combining two cameras
CN111798521A (en) | Calibration method, calibration device, storage medium and electronic equipment
WO2017043331A1 (en) | Image processing device and image processing method
CN116486351A (en) | Driving early warning method, device, equipment and storage medium
US10991155B2 (en) | Landmark location reconstruction in autonomous machine applications
CN116363185B (en) | Geographic registration method, geographic registration device, electronic equipment and readable storage medium
CN112115737B (en) | Vehicle orientation determining method and device and vehicle-mounted terminal
CN114821544B (en) | Perception information generation method and device, vehicle, electronic equipment and storage medium
JP6977725B2 (en) | Image processing device and image processing method
CN117441190A (en) | Position positioning method and device
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination