WO2024017045A1 - Positioning method, system and electronic device - Google Patents

Positioning method, system and electronic device

Info

Publication number
WO2024017045A1
WO2024017045A1 (PCT/CN2023/105343)
Authority
WO
WIPO (PCT)
Prior art keywords
spots
light sources
image
target object
game controller
Application number
PCT/CN2023/105343
Other languages
English (en)
French (fr)
Inventor
庄开元
谢鹏
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Publication of WO2024017045A1


Classifications

    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 — Input arrangements for video game devices
    • A63F13/24 — Constructional details thereof, e.g. game controllers with detachable joystick handles

Definitions

  • Embodiments of the present application relate to the field of data processing, and in particular, to a positioning method, system and electronic device.
  • With the development of VR (Virtual Reality) technology, VR devices have gradually entered users' view, for example VR glasses and all-in-one VR headsets; these VR devices can give users an immersive experience when watching movies, playing games, and so on, and are increasingly favored by users.
  • This application provides a positioning method, system and electronic device. The method can quickly position the target object.
  • In a first aspect, embodiments of the present application provide a positioning method. The method includes: first, obtaining an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1; then, determining, from the N1 light sources, the light source corresponding to each of the N2 spots according to the color information of the N2 spots contained in the image; and finally, determining the pose of the target object based on the N2 spots and their corresponding light sources.
  • Compared with the prior art, which requires multiple positioning calculations, only one positioning calculation is needed here to determine the pose of the target object, which saves computation time and improves positioning efficiency.
  • When the target object is a game controller in a VR scene, the response time to user actions can be reduced, improving the user's VR experience.
  • When the target object is an object in a special-effects production scene, the time needed to produce special-effects videos can be reduced.
  • N1 light sources with different spectra emit light with different wavelengths, that is, the N1 light sources (or the light emitted by the light sources) have different color information.
  • The pose of the target object may be a 6DoF (six degrees of freedom) pose.
  • A single image captured of the target object to be positioned can be obtained, and the positioning method of the present application is then used to determine a pose from this image; that pose can be taken as the pose of the target object.
  • Alternatively, multiple images captured of the target object to be positioned can be acquired (the time difference between any two of these images being less than a preset duration, e.g., 100 ms); the positioning method of this application is then applied to each of these images, yielding multiple poses, and the optimal pose can be selected from them as the pose of the target object.
  • Determining, from the N1 light sources, the light sources corresponding to the N2 spots includes: performing spot detection on the image to determine the N2 spots; determining the color information of the N2 spots; and looking up a preset relationship based on the color information of the N2 spots to determine, from the N1 light sources, the light source corresponding to each spot. Here, one spot corresponds to one light source, and the preset relationship records the correspondence between the N1 light sources and the color information of the N1 light sources.
  • The preset relationship may map the light-source identifiers of the N1 light sources to the colors of the N1 light sources; the color information of a spot may then be its color (which can be determined from the distances between the spot's RGB value and the RGB values of various preset colors). Looking up the preset relationship by the spot's color thus accurately determines, from the N1 light sources, the light source corresponding to each of the N2 spots.
  • Alternatively, the preset relationship may map the light-source identifiers of the N1 light sources to the RGB values of the N1 light sources, and the color information of a spot may be its RGB value. In this case, the RGB value in the preset relationship closest to the spot's RGB value can be found, and the light source with that RGB value determined as the light source corresponding to the spot; in this way, the light sources corresponding to the N2 spots can be accurately determined from the N1 light sources.
  • Determining the pose of the target object based on the N2 spots and their corresponding light sources includes: determining the pixel coordinates of the center point of each of the N2 spots; and determining the pose of the target object based on these center-point pixel coordinates and the position layout information of the light sources corresponding to the N2 spots.
  • The PnP (Perspective-n-Point) algorithm, a method for solving motion from 3D-2D point correspondences, can be used to calculate the pose of the target object. Specifically, the P3P (Perspective-3-Point) algorithm, which solves the 3D-2D correspondence problem using 3 anchor points, can be used.
  • the method is applied in a virtual reality VR game scene or an augmented reality AR game scene, and the target object includes a game controller.
  • this method can also be applied to object special effects production scenarios, where the target objects include objects with N1 light sources with different spectra.
  • the light source is a point light source.
  • The N1 point light sources may be LED (light-emitting diode) point light sources.
  • In a second aspect, embodiments of the present application provide a positioning system. The system includes: a first device and a game controller connected to the first device; the first device is a VR device or an AR (Augmented Reality) device and includes an image acquisition module; the game controller is equipped with N1 light sources with different spectra, N1 being an integer greater than 2.
  • The first device is used to obtain the image captured of the game controller by the image acquisition module, the image containing N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and N2 is less than or equal to N1; to determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and to determine the pose of the target object based on the N2 spots and their corresponding light sources.
  • the image acquisition module includes a color filter corresponding to the spectra of the N1 light sources.
  • the VR device is an all-in-one VR machine; the AR device is an all-in-one AR machine.
  • The game controller is equipped with N1 LED (light-emitting diode) point light sources with different spectra.
  • The second aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In a third aspect, embodiments of the present application provide a positioning system, which includes: a first device, a second device connected to the first device, and a visual module and a game controller connected to the second device; the first device is a terminal device, and the second device is a VR device or an AR device; the visual module includes an image acquisition module, and the game controller is equipped with N1 light sources with different spectra, N1 being an integer greater than 2.
  • The visual module is used to call the image acquisition module to capture an image of the game controller and send the image to the second device; the image contains N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and N2 is less than or equal to N1.
  • The second device is configured to forward the image to the first device.
  • The first device is used to receive the image, which contains the N2 spots; determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and determine the pose of the target object based on the N2 spots and their corresponding light sources.
  • the image acquisition module includes a color filter corresponding to the spectra of the N1 light sources.
  • the VR device is VR glasses; the AR device is AR glasses.
  • The game controller is equipped with N1 LED (light-emitting diode) point light sources with different spectra.
  • The third aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In a fourth aspect, embodiments of the present application provide a positioning device, which includes:
  • an image acquisition module, used to acquire an image captured of the target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1;
  • the light source determination module is used to determine the light sources corresponding to the N2 spots from the N1 light sources based on the color information of the N2 spots contained in the image;
  • the positioning module is used to determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  • The fourth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In a fifth aspect, embodiments of the present application provide an electronic device, including a memory and a processor coupled to each other; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the positioning method in the first aspect or any possible implementation of the first aspect.
  • The fifth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In a sixth aspect, embodiments of the present application provide a chip, including one or more interface circuits and one or more processors; the interface circuits are used to receive signals from a memory of an electronic device and send the signals to the processors, the signals including computer instructions stored in the memory; when the processors execute the computer instructions, the electronic device is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
  • The sixth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In a seventh aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program; when the computer program runs on a computer or processor, the computer or processor is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
  • The seventh aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • In an eighth aspect, embodiments of the present application provide a computer program product including a software program; when the software program is executed by a computer or processor, the computer or processor is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
  • The eighth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
  • Figure 1a is a schematic diagram of an exemplary application scenario
  • Figure 1b is a schematic diagram of an exemplary game controller
  • Figure 1c is a schematic diagram of an exemplary application scenario
  • Figure 2a is a schematic diagram of an exemplary positioning process
  • Figure 2b is a schematic diagram of an image taken for a game controller
  • Figure 3 is a schematic diagram of an exemplary positioning process
  • Figure 4a is a schematic diagram of an exemplary positioning process
  • Figure 4b is a schematic diagram of an exemplary pose solution method
  • Figure 5 is a schematic diagram of an exemplary positioning system
  • Figure 6 is a schematic diagram of an exemplary positioning system
  • Figure 7 is a schematic diagram of an exemplary positioning device
  • Figure 8 is a schematic structural diagram of an exemplary device.
  • The term "A and/or B" can mean three situations: A exists alone, A and B exist simultaneously, or B exists alone.
  • The terms "first" and "second" in the description and claims of the embodiments of this application are used to distinguish different objects rather than to describe a specific order of objects.
  • For example, a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
  • multiple processing units refer to two or more processing units; multiple systems refer to two or more systems.
  • Figure 1a is a schematic diagram of an exemplary application scenario.
  • Figure 1a is an exemplary VR game scene.
  • The VR kit can include: a VR device (VR glasses), game controllers and a visual component, as shown in Figure 1a.
  • The VR glasses can be connected to a mobile phone via a data cable, the game controllers (two of them) connected to the VR glasses, and the visual component connected to the VR glasses.
  • Then, as shown in Figure 1a, the user can wear the mobile phone at the waist, wear the VR glasses on the head and hold the game controllers; after launching the game, the VR game can begin.
  • VR glasses and mobile phones can also be connected wirelessly, and this application does not limit this.
  • VR glasses can also be connected to other terminal devices such as tablet computers, personal notebooks, etc. This application does not limit this.
  • the visual component and the VR glasses may be independent of each other, or the visual component may be a part of the VR glasses (that is, the visual component and the VR glasses are integrated), and this application does not limit this.
  • VR equipment shown in Figure 1a is only an example of the VR equipment of this application, and this application does not limit the components included in the VR equipment.
  • the game controller needs to be positioned to follow the user's hand movements in the game and respond according to the user's hand movements to achieve game interaction.
  • To quickly position the game controller, N1 light sources with different spectra (N1 being an integer greater than 2) can be arranged on the game controller.
  • The image of the game controller can then be captured by a VR device (such as an all-in-one VR headset) or the visual component; the VR device (such as the all-in-one headset) or the terminal device then positions the game controller based on the light sources arranged on it and the spots corresponding to those light sources in the captured image.
  • Figure 1b is a schematic diagram of an exemplary game controller.
  • N1 light sources can be arranged on the outer wall of the ring-shaped component of the game controller (only 7 light sources are shown in Figure 1b: light source 1, light source 2, light source 3, light source 4, light source 5, light source 6 and light source 7).
  • The light sources can be arranged on the outer wall of the ring-shaped component of the game controller in a variety of ways, set according to requirements; that is, both the number N1 and the positions of the N1 light sources on the outer wall of the ring-shaped component can be chosen as needed, and this application does not impose restrictions on this.
  • the position of the light source arranged on the game controller can be determined with the goal of capturing as many light sources as possible from various angles.
  • Figure 1b is only an example of the light source layout on the game controller and does not represent the actual light source layout of the game controller.
  • Figure 1a is only an example of an application scenario of this application. This application can also be applied to other VR/AR scenarios that use game controllers, such as VR/AR sports scenarios, and this application does not limit this.
  • Figure 1c is a schematic diagram of an exemplary application scenario. In the embodiment of Figure 1c, a scene for creating object special effects is shown.
  • N1 light sources with different spectra can be arranged on parts of an object (such as a ship), for example on the bow, as circled in Figure 1c.
  • part of the object can be positioned based on the light source arranged on the object and the spots corresponding to the light source in the captured video image.
  • This application uses the positioning of a game controller in a VR game scene as an example to illustrate.
  • Figure 2a is a schematic diagram illustrating an exemplary positioning process.
  • S201: Obtain an image captured of the target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
  • The object to be positioned can be a game controller in a VR game scene on which N1 light sources with different spectra are arranged, or an object in a special-effects production scene on which N1 light sources with different spectra are arranged; this application does not limit this.
  • the N1 light sources may be N1 point light sources.
  • The N1 point light sources may be LED (light-emitting diode) point light sources.
  • The spectra of the N1 light sources are all different; that is, the wavelengths of the light they emit differ, i.e., the color information of the N1 light sources (or of the light they emit) differs.
  • For example, with N1 = 7, the color information of light source 1, light source 2, light source 3, light source 4, light source 5, light source 6 and light source 7 is, respectively: red, orange, yellow, green, blue, indigo and violet.
  • A light source is projected onto the imaging plane of the camera, so the image captured of the light source contains a corresponding blob (spot); the image captured of the target object to be positioned may therefore include multiple spots. Since only some of the light sources arranged on the target object may be captured while photographing it (say N2 of them, where N2 is an integer greater than 2 and less than or equal to N1), the image captured of the target object may include N2 spots, and these N2 spots correspond to N2 light sources.
  • In one possible approach, a visual component connected to a VR device captures the image of the target object to be positioned and sends the captured image through the VR device to the terminal device; the terminal device then executes the positioning method of this application, i.e., S201-S203.
  • In another possible approach, a VR device (an all-in-one VR headset) captures the image of the target object to be positioned, and the VR device itself executes the positioning method of the present application, i.e., S201-S203.
  • S202: Determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image. Since the color information of the light emitted by the N1 light sources differs, the color information of the N2 spots in the image captured of the target object also differs; the light source corresponding to each of the N2 spots can therefore be determined from the N1 light sources according to each spot's color information, with one spot corresponding to one light source.
  • Figure 2b is a schematic diagram of an image taken for a game controller.
  • Figure 2b an image taken of the gamepad in Figure 1b is shown.
  • the image captured for the game controller contains 4 spots, that is, 4 light sources on the game controller are captured.
  • Assume the color information of the 4 spots in the image captured of the game controller is, respectively: green, blue, indigo and violet; then, based on the color information of these 4 spots, the corresponding light sources can be found among the N1 light sources: the light source corresponding to the green spot is light source 4, the blue spot light source 5, the indigo spot light source 6, and the violet spot light source 7.
  • S203 Determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  • the pose of the target object can be determined based on the positions of the N2 spots in the image and the positions of the light sources corresponding to the N2 spots on the target object. For example, 3 spots and 3 light sources corresponding to these 3 spots can be selected from N2 spots for positioning to determine the pose of the target object.
  • The pose of the target object may refer to a 6DoF pose.
  • the target object is a game controller in a VR scene, the response time of the user's action can be reduced and the user's VR experience can be improved.
  • the target object is an object in the object special effects production scene, it can save time in making special effects videos.
  • Figure 3 is a schematic diagram illustrating an exemplary positioning process.
  • S301: Obtain an image captured of the target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
  • S301 is similar to the above-mentioned S201 and will not be described again.
  • S302 Perform spot detection on the image to determine N2 spots.
  • Spot detection can be performed on the image captured of the target object to be positioned: regions whose pixel gray values are larger or smaller than those of their surroundings are detected and identified as spots.
  • Spot detection can be based on derivative-based differential methods (such as the LoG (Laplacian of Gaussian) algorithm or the DoG (Difference of Gaussians) algorithm); this type of method is called a differential detector: a filter kernel is convolved with the image to find regions with the same characteristics as the filter kernel, i.e., to find the spots.
  • a watershed algorithm based on local extreme values can be used for spot detection.
  • S304: Look up a preset relationship based on the color information of the N2 spots to determine, from the N1 light sources, the light sources corresponding to the N2 spots.
  • In the process of arranging the light sources on the target object, a preset relationship may be established between the N1 light sources and the color information of the N1 light sources.
  • color information may include color or RGB values.
  • a preset relationship can be established based on the light source identifiers of the N1 light sources and the colors of the N1 light sources.
  • the light source identifier is used to uniquely identify a light source.
  • Assume the light-source identifiers of these 14 light sources are L1, L2, ..., L14, and their colors are, respectively: red, orange, yellow, green, blue, indigo, violet, gray, pink, brown, khaki, off-white, flesh and dark green.
  • The established preset relationship can then be shown in Table 1 (light-source identifier → color).
  • a preset relationship can be established based on the light source identifiers of the N1 light sources and the RGB values of the N1 light sources.
  • Assume the light-source identifiers of these 14 light sources are L1, L2, ..., L14, and their RGB values are, respectively: (R255,G0,B0), (R255,G128,B0), (R255,G255,B0), (R0,G255,B0), (R0,G0,B255), (R0,G255,B255), (R255,G0,B255), (R96,G96,B96), (R255,G192,B203), (R128,G64,B0), (R195,G176,B145), (R248,G246,B231), (R249,G236,B228) and (R0,G87,B55); the established preset relationship can then be shown in Table 2 (light-source identifier → RGB value).
  • If the preset relationship was established from the light-source identifiers and colors, then the RGB value of each of the N2 spots can be determined, the color of each spot derived from its RGB value, and that color used as the spot's color information. For example, for each spot, the spot's RGB value can be compared with the RGB values corresponding to various preset colors (including the colors of the N1 light sources) to determine the spot's color information.
  • For a preset color with RGB value (R1,G1,B1) and a spot with RGB value (R2,G2,B2), the distance is diff = sqrt(((R1-R2)/256)² + ((G1-G2)/256)² + ((B1-B2)/256)²); the smaller the distance diff, the more similar the two colors are, and the larger it is, the less similar they are.
  • the distance between the RGB value of the spot and the RGB value of various preset colors can be obtained.
  • The preset color whose RGB value is at the smallest distance diff from the spot's RGB value can be determined and used as the spot's color information.
  • Then, the preset relationship (such as Table 1) can be looked up by the spot's color information to determine the light source with the same color as the spot. In this way, the light sources corresponding to the N2 spots can be determined.
  • If the preset relationship was established from the light-source identifiers and the RGB values of the N1 light sources, then after the N2 spots contained in the image are determined, the RGB value of each of the N2 spots is determined and used as that spot's color information. Then, for each spot, the preset relationship (such as Table 2) is looked up: the distance between the spot's RGB value and each light source's RGB value is calculated as above, and the light source at the smallest distance diff is identified as the light source corresponding to the spot.
  • S305 Determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  • S305 may refer to the description of S203 above, which will not be described again here.
  • Figure 4a is a schematic diagram illustrating an exemplary positioning process.
  • S401: Obtain an image captured of the target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
  • S402 Perform spot detection on the image to determine N2 spots.
  • S404: Look up a preset relationship based on the color information of the N2 spots to determine, from the N1 light sources, the light sources corresponding to the N2 spots.
  • S405 Determine the pixel coordinates of the center points corresponding to the N2 spots.
  • The gray centroid method can be used to calculate the center-point pixel coordinates of each of the N2 spots. The center-point pixel coordinates (x0, y0) of one spot can be computed as x0 = (Σi Σj xi·fij) / (Σi Σj fij) and y0 = (Σi Σj yj·fij) / (Σi Σj fij), where the sums run over the pixels of the spot's region whose gray value is not less than the threshold T.
  • Here, xi is the coordinate of the spot in the i-th row, yj is the coordinate of the spot in the j-th column, and fij is the pixel value of the spot at the i-th row and j-th column.
  • T is a gray-value threshold, which can be set according to requirements; this application does not impose restrictions on this.
  • m is the total number of rows contained in the region corresponding to the spot, and n is the total number of columns.
  • S406 Determine the pose of the target object based on the pixel coordinates of the center point corresponding to the N2 spots and the position layout information of the light source corresponding to the N2 spots.
  • The PnP (Perspective-n-Point) algorithm can be used to calculate the pose of the target object; the PnP algorithm is a method for solving motion from 3D-2D point correspondences.
  • Specifically, the P3P (Perspective-3-Point) algorithm can be used, which solves the 3D-2D correspondence problem using 3 anchor points.
  • Three spots can be selected from the N2 spots, and the pose of the target object is then calculated from the center-point pixel coordinates of these 3 spots and the position layout information on the target object of the 3 light sources corresponding to them.
  • Figure 4b is a schematic diagram of an exemplary pose solving method.
  • In Figure 4b, O is the optical center of the camera; A, B and C are three light sources on the target object; the rectangular frame is the image captured of the target object; and a, b and c are the three spots in the image, i.e., the projections of A, B and C on the camera's imaging plane.
  • The position layout information of A, B and C refers to the coordinates of A, B and C on the target object; these are coordinates in the world coordinate system, not the camera coordinate system. The center-point pixel coordinates of a, b and c are coordinates in the camera coordinate system.
  • ⁇ Oab corresponds to ⁇ OAB
  • ⁇ Obc corresponds to ⁇ OBC
  • ⁇ Oac corresponds to ⁇ OAC
  • the three cosine angles cos ⁇ a,b>, cos ⁇ b,c ⁇ , cos ⁇ a,b> can be determined.
  • The unknowns x and y in these formulas change as the camera moves; the P3P problem can therefore finally be converted into a system of quadratic (polynomial) equations in x and y, which may have up to four solutions.
  • The remaining (N2-3) spots among the N2 spots can be used as verification points: these (N2-3) spots and their corresponding (N2-3) light sources form verification point pairs, which are used to select the most likely of the candidate solutions; this yields the 3D coordinates of A, B and C in the camera coordinate system, i.e., the pose of the target object.
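  • As an illustration only (not part of the patent text), the following minimal Python sketch shows one way this disambiguation step could be implemented with OpenCV. It assumes `obj_pts` (an N2×3 float32 array of light-source coordinates in the world frame), `img_pts` (an N2×2 float32 array of spot center-point pixel coordinates in the same order, with N2 > 3), and a 3×3 intrinsic matrix `K`; the first three pairs feed P3P and the rest serve as verification points.

```python
import cv2
import numpy as np

def p3p_with_verification(obj_pts, img_pts, K, dist=None):
    """Solve P3P from the first 3 point pairs, then use the remaining
    (N2-3) pairs as verification points to pick the most likely of the
    up-to-four candidate solutions."""
    dist = np.zeros(5, dtype=np.float32) if dist is None else dist
    # P3P on three 3D-2D pairs; may return up to four (rvec, tvec) candidates.
    n, rvecs, tvecs = cv2.solveP3P(obj_pts[:3], img_pts[:3], K, dist,
                                   flags=cv2.SOLVEPNP_P3P)
    best, best_err = None, np.inf
    for rvec, tvec in zip(rvecs, tvecs):
        # Reproject the verification points under this candidate pose and
        # measure the mean pixel error against the detected spot centers.
        proj, _ = cv2.projectPoints(obj_pts[3:], rvec, tvec, K, dist)
        err = np.linalg.norm(proj.reshape(-1, 2) - img_pts[3:], axis=1).mean()
        if err < best_err:
            best, best_err = (rvec, tvec), err
    return best  # (rvec, tvec): the selected 6DoF pose
```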
  • FIG. 5 is a schematic diagram of an exemplary positioning system.
  • The positioning system may include: a first device 501 and a game controller 502 connected to the first device 501; the first device 501 is a virtual reality VR device or an augmented reality AR device and includes an image acquisition module 503; the game controller is equipped with N1 light sources 504 with different spectra, N1 being an integer greater than 2.
  • The first device 501 can be used to obtain an image captured of the game controller 502 by the image acquisition module 503, the image containing N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and less than or equal to N1; to determine, from the N1 light sources, the light sources corresponding to the N2 spots based on the color information of the N2 spots contained in the image; and to determine the pose of the target object based on the N2 spots and their corresponding light sources.
  • The first device 501 can be used to perform spot detection on the image to determine the N2 spots, determine the color information of the N2 spots, and look up a preset relationship based on that color information to determine, from the N1 light sources, the light source corresponding to each of the N2 spots; here, one spot corresponds to one light source, and the preset relationship records the correspondence between the N1 light sources and their color information.
  • The first device 501 can also be used to determine the center-point pixel coordinates of each of the N2 spots, and to determine the pose of the target object according to these center-point pixel coordinates and the position layout information of the light sources corresponding to the N2 spots.
  • The game controller 502 is equipped with N1 LED point light sources with different spectra.
  • the image acquisition module 503 includes a color filter corresponding to the spectra of N1 light sources.
  • the VR device is an all-in-one VR machine.
  • the AR device is an AR all-in-one machine.
  • FIG. 6 is a schematic diagram of an exemplary positioning system.
  • The positioning system may include a first device 601, a second device 602 connected to the first device 601, and a visual module 603 and a game controller 604 connected to the second device 602; the first device 601 is a terminal device, and the second device 602 is a virtual reality VR device or an augmented reality AR device; the visual module 603 includes an image acquisition module 605, and the game controller 604 is equipped with N1 light sources 606 with different spectra, N1 being an integer greater than 2.
  • The visual module 603 is used to call the image acquisition module 605 to capture an image of the game controller 604 and send the image to the second device; the image contains N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and less than or equal to N1.
  • the second device 602 is used to forward the image to the first device 601;
  • The first device 601 is used to receive the image; determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and determine the pose of the target object based on the N2 spots and their corresponding light sources.
  • The first device 601 can be used to perform spot detection on the image to determine the N2 spots, determine the color information of the N2 spots, and look up a preset relationship based on that color information to determine, from the N1 light sources, the light source corresponding to each of the N2 spots; here, one spot corresponds to one light source, and the preset relationship records the correspondence between the N1 light sources and their color information.
  • The first device 601 can also be used to determine the center-point pixel coordinates of each of the N2 spots, and to determine the pose of the target object according to these center-point pixel coordinates and the position layout information of the light sources corresponding to the N2 spots.
  • the image acquisition module 605 includes a color filter corresponding to the spectra of N1 light sources.
  • the VR device is VR glasses.
  • the AR device is AR glasses.
  • The game controller 604 is equipped with N1 LED point light sources with different spectra.
  • FIG. 7 is a schematic diagram of an exemplary positioning device.
  • the positioning device may include:
  • the image acquisition module 701 is used to acquire an image taken for a target object to be positioned.
  • N1 light sources with different spectra are arranged on the target object; the image contains N2 spots, and the N2 spots correspond to N2 light sources; N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1;
  • the light source determination module 702 is used to determine the light sources corresponding to the N2 spots from the N1 light sources according to the color information of the N2 spots contained in the image;
  • the positioning module 703 is used to determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  • The light source determination module 702 is specifically used to perform spot detection on the image to determine the N2 spots; determine the color information of the N2 spots; and look up a preset relationship based on that color information to determine, from the N1 light sources, the light source corresponding to each of the N2 spots; here, one spot corresponds to one light source, and the preset relationship records the correspondence between the N1 light sources and their color information.
  • The positioning module 703 is specifically used to determine the center-point pixel coordinates of each of the N2 spots, and to determine the pose of the target object according to these center-point pixel coordinates and the position layout information of the light sources corresponding to the N2 spots.
  • the device can be applied in a virtual reality VR game scene or an augmented reality AR game scene, and the target object includes a game controller.
  • the light source is a point light source.
  • FIG. 8 shows a schematic block diagram of a device 800 according to an embodiment of the present application.
  • the device 800 may include: a processor 801 and a transceiver/transceiver pin 802, and optionally, a memory 803.
  • In addition to a data bus, the bus 804 includes a power bus, a control bus and a status signal bus; however, for clarity of description, the various buses are all referred to as bus 804 in the figure.
  • the memory 803 may be used to store instructions in the foregoing method embodiments.
  • the processor 801 can be used to execute instructions in the memory 803, and control the receiving pin to receive signals, and control the transmitting pin to send signals.
  • the device 800 may be the electronic device or a chip of the electronic device in the above method embodiment.
  • This embodiment also provides a computer-readable storage medium.
  • Computer instructions are stored in the computer-readable storage medium; when the computer instructions run on an electronic device, the electronic device is caused to execute the related method steps above to implement the positioning method in the above embodiments.
  • This embodiment also provides a computer program product.
  • When the computer program product runs on a computer, the computer is caused to perform the related steps above to implement the positioning method in the above embodiments.
  • Embodiments of the present application also provide a device.
  • This device may be a chip, a component or a module.
  • the device may include a connected processor and a memory.
  • the memory is used to store computer execution instructions.
  • the processor can execute computer execution instructions stored in the memory, so that the chip performs the positioning method in each of the above method embodiments.
  • The electronic device, computer-readable storage medium, computer program product and chip provided in this embodiment are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, refer to those of the corresponding methods provided above, which are not repeated here.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • The division into modules or units is only a division by logical function; in actual implementation there may be other divisions: for example, multiple units or components may be combined or integrated into another device, or some features may be ignored or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • a unit described as a separate component may or may not be physically separate.
  • a component shown as a unit may be one physical unit or multiple physical units, that is, it may be located in one place, or it may be distributed to multiple different places. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application can be integrated into one processing unit, each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the above integrated units can be implemented in the form of hardware or software functional units.
  • Integrated units may be stored in a readable storage medium if they are implemented in the form of software functional units and sold or used as independent products.
  • The technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which can be a microcontroller, a chip, etc.) or a processor to execute all or part of the steps of the methods of the various embodiments of the present application.
  • The aforementioned storage media include media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
  • the steps of the methods or algorithms described in connection with the disclosure of the embodiments of this application can be implemented in hardware or by a processor executing software instructions.
  • Software instructions can be composed of corresponding software modules.
  • The software modules can be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable hard disk, a CD-ROM, or any other form of storage medium well known in the art.
  • An exemplary storage medium is coupled to the processor so that the processor can read information from, and write information to, the storage medium.
  • the storage medium can also be an integral part of the processor.
  • the processor and storage media may be located in an ASIC.
  • Computer-readable media includes computer-readable storage media and communication media including any medium that facilitates transfer of a computer program from one place to another.
  • Storage media can be any available media that can be accessed by a general purpose or special purpose computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Embodiments of this application provide a positioning method, system and electronic device. The method includes: first, obtaining an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1; then, determining, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and finally, determining the pose of the target object based on the N2 spots and their corresponding light sources. In this way, the pose of the target object can be determined with a single positioning calculation, which saves computation time and improves positioning efficiency.

Description

Positioning method, system and electronic device
This application claims priority to Chinese Patent Application No. 202210858901.9, filed with the China National Intellectual Property Administration on July 21, 2022 and entitled "定位方法、系统及电子设备" ("Positioning method, system and electronic device"), which is incorporated herein by reference in its entirety.
Technical Field
Embodiments of this application relate to the field of data processing, and in particular to a positioning method, system and electronic device.
Background
With the development of VR (Virtual Reality) technology, VR devices have gradually entered users' view, for example VR glasses and all-in-one VR headsets; these VR devices can give users an immersive experience when watching movies, playing games, and so on, and are increasingly favored by users.
At present, playing games with a VR device requires a game controller, such as a 6DOF (6 degrees of freedom) controller. The speed at which the game controller is positioned affects the speed of response to user actions and hence the user's gaming experience. How to position a game controller quickly has therefore become one of the problems to be solved urgently.
Summary
To solve the above technical problem, this application provides a positioning method, system and electronic device. The method can quickly position a target object.
In a first aspect, embodiments of this application provide a positioning method. The method includes: first, obtaining an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1; then, determining, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and finally, determining the pose of the target object based on the N2 spots and their corresponding light sources. Compared with the prior art, which requires multiple positioning calculations to achieve positioning, this application needs only a single positioning calculation to determine the pose of the target object, which saves computation time and improves positioning efficiency.
For example, when the target object is a game controller in a VR scene, the response time to user actions can be reduced, improving the user's VR experience. When the target object is an object in a special-effects production scene, the time needed to produce special-effects videos can be saved.
For example, the N1 light sources with different spectra emit light of different wavelengths; that is, the N1 light sources (or the light they emit) have different color information. For example, with N1 = 7, the color information of light source 1, light source 2, light source 3, light source 4, light source 5, light source 6 and light source 7 is, respectively: red, orange, yellow, green, blue, indigo and violet.
For example, the pose of the target object may be a 6DoF pose.
For example, a single image captured of the target object to be positioned can be obtained, and the positioning method of this application is then used to determine a pose from this image; that pose can then be determined as the pose of the target object.
For example, multiple images captured of the target object to be positioned can be obtained (the time difference between any two of these images being less than a preset duration, e.g., 100 ms); the positioning method of this application is then applied to each of these images, yielding multiple poses. The optimal pose can then be selected from these poses as the pose of the target object, based on sensor information collected by other sensors, such as an IMU (Inertial Measurement Unit).
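The following minimal Python sketch is an illustration only, not part of the patent text: the text does not specify how the IMU information selects the optimal pose, so the criterion used here (closeness to an IMU-extrapolated pose) and the input names are assumptions.

```python
import numpy as np

def select_optimal_pose(candidate_poses, imu_predicted_pose,
                        w_rot=1.0, w_trans=1.0):
    """Pick the candidate 6DoF pose closest to an IMU-predicted pose.

    candidate_poses: list of (rvec, tvec) pairs, one per captured image
                     (rvec: Rodrigues rotation vector, tvec: translation).
    imu_predicted_pose: hypothetical (rvec, tvec) extrapolated from IMU
                        measurements; how it is obtained is not specified
                        in the source text.
    """
    rvec_imu, tvec_imu = (np.asarray(v, dtype=float)
                          for v in imu_predicted_pose)

    def distance(pose):
        rvec, tvec = (np.asarray(v, dtype=float) for v in pose)
        # Weighted rotation + translation discrepancy to the IMU prediction.
        return (w_rot * np.linalg.norm(rvec - rvec_imu)
                + w_trans * np.linalg.norm(tvec - tvec_imu))

    return min(candidate_poses, key=distance)
```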
According to the first aspect, determining, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image includes: performing spot detection on the image to determine the N2 spots; determining the color information of the N2 spots; and looking up a preset relationship based on the color information of the N2 spots to determine, from the N1 light sources, the light sources corresponding to the N2 spots. Here, one spot corresponds to one light source, and the preset relationship includes the relationship between the N1 light sources and the color information of the N1 light sources.
For example, the preset relationship may include the relationship between the light-source identifiers of the N1 light sources and the colors of the N1 light sources; the color information of a spot may then be its color (which can be determined from the distances between the spot's RGB value and the RGB values of various preset colors). In this way, looking up the preset relationship by the spot's color can accurately determine, from the N1 light sources, the light sources corresponding to the N2 spots.
For example, the preset relationship may instead include the relationship between the light-source identifiers of the N1 light sources and the RGB values of the N1 light sources; the color information of a spot may then be its RGB value. In this way, the RGB value in the preset relationship closest to the spot's RGB value can be found, and the light source with that RGB value determined as the light source corresponding to the spot; the light sources corresponding to the N2 spots can thus be accurately determined from the N1 light sources.
According to the first aspect or any implementation thereof, determining the pose of the target object based on the N2 spots and their corresponding light sources includes: determining the center-point pixel coordinates of each of the N2 spots; and determining the pose of the target object based on these center-point pixel coordinates and the position layout information of the light sources corresponding to the N2 spots.
For example, the PnP (Perspective-n-Point) algorithm can be used to calculate the pose of the target object; the PnP algorithm is a method for solving motion from 3D-2D point correspondences. Specifically, the P3P (Perspective-3-Point) algorithm, i.e., a method that uses 3 anchor points to solve the 3D-2D correspondence problem, can be used to calculate the pose of the target object.
According to the first aspect or any implementation thereof, the method is applied in a virtual reality VR game scenario or an augmented reality AR game scenario, and the target object includes a game controller.
For example, the method can also be applied in object special-effects production scenarios, where the target object includes an object on which N1 light sources with different spectra are arranged.
According to the first aspect or any implementation thereof, the light sources are point light sources.
For example, the N1 point light sources may be LED (light-emitting diode) point light sources.
In a second aspect, embodiments of this application provide a positioning system. The system includes: a first device and a game controller connected to the first device; the first device is a VR device or an AR (Augmented Reality) device and includes an image acquisition module; the game controller is equipped with N1 light sources with different spectra, N1 being an integer greater than 2.
The first device is configured to: obtain an image captured of the game controller by the image acquisition module, the image containing N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and N2 is less than or equal to N1; determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and determine the pose of the target object based on the N2 spots and their corresponding light sources.
According to the second aspect, the image acquisition module includes a color filter corresponding to the spectra of the N1 light sources.
According to the second aspect or any implementation thereof, the VR device is an all-in-one VR headset, and the AR device is an all-in-one AR headset.
According to the second aspect or any implementation thereof, the game controller is equipped with N1 LED (light-emitting diode) point light sources with different spectra.
The second aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In a third aspect, embodiments of this application provide a positioning system. The system includes: a first device, a second device connected to the first device, and a visual module and a game controller connected to the second device; the first device is a terminal device, and the second device is a VR device or an AR device; the visual module includes an image acquisition module, and the game controller is equipped with N1 light sources with different spectra, N1 being an integer greater than 2.
The visual module is configured to call the image acquisition module to capture an image of the game controller and send the image to the second device; the image contains N2 spots corresponding to N2 light sources, where N2 is an integer greater than 2 and N2 is less than or equal to N1.
The second device is configured to forward the image to the first device.
The first device is configured to: receive the image, which contains the N2 spots; determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and determine the pose of the target object based on the N2 spots and their corresponding light sources.
According to the third aspect, the image acquisition module includes a color filter corresponding to the spectra of the N1 light sources.
According to the third aspect or any implementation thereof, the VR device is VR glasses, and the AR device is AR glasses.
According to the third aspect or any implementation thereof, the game controller is equipped with N1 LED (light-emitting diode) point light sources with different spectra.
The third aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In a fourth aspect, embodiments of this application provide a positioning device, which includes:
an image acquisition module, configured to obtain an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1;
a light source determination module, configured to determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image; and
a positioning module, configured to determine the pose of the target object based on the N2 spots and their corresponding light sources.
The fourth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In a fifth aspect, embodiments of this application provide an electronic device, including a memory and a processor coupled to each other; the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the positioning method in the first aspect or any possible implementation of the first aspect.
The fifth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In a sixth aspect, embodiments of this application provide a chip, including one or more interface circuits and one or more processors; the interface circuits are configured to receive signals from a memory of an electronic device and send the signals to the processors, the signals including computer instructions stored in the memory; when the processors execute the computer instructions, the electronic device is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
The sixth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In a seventh aspect, embodiments of this application provide a computer-readable storage medium storing a computer program; when the computer program runs on a computer or processor, the computer or processor is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
The seventh aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
In an eighth aspect, embodiments of this application provide a computer program product including a software program; when the software program is executed by a computer or processor, the computer or processor is caused to perform the positioning method in the first aspect or any possible implementation of the first aspect.
The eighth aspect and any implementation thereof correspond to the first aspect and any implementation thereof; for the corresponding technical effects, refer to those of the first aspect and its implementations, which are not repeated here.
Brief Description of the Drawings
Figure 1a is a schematic diagram of an exemplary application scenario;
Figure 1b is a schematic diagram of an exemplary game controller;
Figure 1c is a schematic diagram of an exemplary application scenario;
Figure 2a is a schematic diagram of an exemplary positioning process;
Figure 2b is a schematic diagram of an image captured of a game controller;
Figure 3 is a schematic diagram of an exemplary positioning process;
Figure 4a is a schematic diagram of an exemplary positioning process;
Figure 4b is a schematic diagram of an exemplary pose-solving method;
Figure 5 is a schematic diagram of an exemplary positioning system;
Figure 6 is a schematic diagram of an exemplary positioning system;
Figure 7 is a schematic diagram of an exemplary positioning device;
Figure 8 is a schematic structural diagram of an exemplary apparatus.
Detailed Description
The technical solutions in the embodiments of this application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some rather than all of the embodiments of this application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
The term "and/or" herein merely describes an association between associated objects, indicating three possible relationships; for example, "A and/or B" can mean: A exists alone, A and B exist simultaneously, or B exists alone.
The terms "first" and "second" in the specification and claims of the embodiments of this application are used to distinguish different objects rather than to describe a specific order of objects. For example, a first target object and a second target object are used to distinguish different target objects rather than to describe a specific order of target objects.
In the embodiments of this application, words such as "exemplary" or "for example" are used to indicate an example, illustration or explanation. Any embodiment or design described as "exemplary" or "for example" should not be construed as preferable or advantageous over other embodiments or designs; rather, such words are intended to present a related concept in a concrete manner.
In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more. For example, multiple processing units means two or more processing units, and multiple systems means two or more systems.
Figure 1a is a schematic diagram of an exemplary application scenario, showing an exemplary VR game scenario.
For example, a user can purchase a VR kit and use it to play VR games. The VR kit can include: a VR device (VR glasses), game controllers and a visual component, as shown in Figure 1a. The VR glasses can be connected to a mobile phone via a data cable, the game controllers (two of them) connected to the VR glasses, and the visual component connected to the VR glasses. Then, as shown in Figure 1a, the user can wear the phone at the waist, wear the VR glasses on the head and hold the game controllers; after launching a game, the VR game can begin.
It should be understood that the VR glasses and the phone can also be connected wirelessly, and this application does not limit this. In addition, the VR glasses can also be connected to other terminal devices such as tablet computers or laptops, and this application does not limit this either.
It should be understood that if the VR device in Figure 1a were an all-in-one VR headset instead of VR glasses, the headset would need to be connected neither to a terminal device nor to a visual component; after connecting the game controllers to the headset, the user wears the headset and holds the controllers, and can start the VR game after launching it.
It should be understood that the visual component and the VR glasses may be independent of each other, or the visual component may be part of the VR glasses (i.e., the visual component and the VR glasses are one unit); this application does not limit this.
It should be understood that the VR kit shown in Figure 1a is only an example of the VR kit of this application; this application does not limit the components included in the VR kit.
For example, during a VR game, the game controller needs to be positioned so that the game can follow the user's hand movements and respond to them, achieving game interaction.
For example, to achieve quick positioning of the game controller, N1 light sources with different spectra (N1 being an integer greater than 2) can be arranged on the game controller. During a VR game, an image of the game controller can be captured by the VR device (e.g., an all-in-one VR headset) or the visual component; the VR device (e.g., the all-in-one headset) or the terminal device then positions the game controller based on the light sources arranged on it and the spots corresponding to those light sources in the captured image.
Figure 1b is a schematic diagram of an exemplary game controller.
Referring to Figure 1b, for example, N1 light sources can be arranged on the outer wall of the ring-shaped component of the game controller (Figure 1b shows only 7 light sources: light source 1, light source 2, light source 3, light source 4, light source 5, light source 6 and light source 7). It should be noted that the light sources can be arranged on the outer wall of the ring-shaped component in a variety of ways, set according to requirements; that is, both the number N1 and the positions of the N1 light sources on the outer wall of the ring-shaped component can be chosen as needed, and this application does not limit this. For example, the positions of the light sources on the game controller can be determined with the goal of capturing as many light sources as possible from every viewing angle.
It should be understood that Figure 1b is only an example of the light-source layout on a game controller and does not represent the actual layout of a game controller.
It should be understood that this application can also be applied in AR (Augmented Reality) game scenarios, and this application does not limit this.
It should be noted that Figure 1a is only an example of an application scenario of this application; this application can also be applied to other VR/AR scenarios that use game controllers, such as VR/AR sports scenarios, and this application does not limit this.
Figure 1c is a schematic diagram of an exemplary application scenario. The embodiment of Figure 1c shows an object special-effects production scenario.
Referring to Figure 1c, for example, in a special-effects production scenario, N1 light sources with different spectra can be arranged on parts of an object (such as a ship), for example on the bow, as circled in Figure 1c. After a video containing the object is captured, those parts of the object can be positioned based on the light sources arranged on the object and the spots corresponding to those light sources in the captured video images.
It should be understood that this application may also cover other application scenarios, and this application does not limit this.
This application is illustrated below using the positioning of a game controller in a VR game scenario as an example.
Figure 2a is a schematic diagram of an exemplary positioning process.
S201: Obtain an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
For example, the object to be positioned may be a game controller in a VR game scenario on which N1 light sources with different spectra are arranged, or an object in a special-effects production scenario on which N1 light sources with different spectra are arranged, and so on; this application does not limit this.
For example, the N1 light sources may be N1 point light sources, for example LED (light-emitting diode) point light sources. The spectra of the N1 light sources are all different; that is, the wavelengths of the light emitted by the N1 light sources differ, i.e., the color information of the N1 light sources (or of the light they emit) differs.
Referring to Figure 1b, with N1 = 7, the color information of light source 1, light source 2, light source 3, light source 4, light source 5, light source 6 and light source 7 is, respectively: red, orange, yellow, green, blue, indigo and violet.
For example, a light source is projected onto the imaging plane of the camera, so the image captured of the light source contains a corresponding blob (spot); the image captured of the target object to be positioned may therefore include multiple spots. Since only some of the light sources arranged on the target object may be captured while photographing it (say N2 of them, where N2 is an integer greater than 2 and less than or equal to N1), the image captured of the target object may include N2 spots, and these N2 spots correspond to N2 light sources.
In one possible approach, a visual component connected to the VR device (VR glasses) captures an image of the target object to be positioned and then sends the captured image through the VR device to the terminal device; the terminal device executes the positioning method of this application, i.e., S201-S203.
In another possible approach, a VR device (an all-in-one VR headset) captures an image of the target object to be positioned, and the VR device itself executes the positioning method of this application, i.e., S201-S203.
S202: Determine, from the N1 light sources, the light sources corresponding to the N2 spots according to the color information of the N2 spots contained in the image.
For example, since the color information of the light emitted by the N1 light sources differs, the color information of the N2 spots in the image captured of the target object also differs; the light source corresponding to each of the N2 spots can therefore be determined from the N1 light sources according to each spot's color information, with one spot corresponding to one light source.
Figure 2b is a schematic diagram of an image captured of a game controller, namely the game controller of Figure 1b.
Referring to Figure 2b, for example, the image captured of the game controller contains 4 spots; that is, 4 of the light sources on the game controller were captured. Suppose the color information of the 4 spots in the image is, respectively: green, blue, indigo and violet; then, based on the color information of these 4 spots, the light sources corresponding to them can be found among the N1 light sources: the light source corresponding to the green spot is light source 4, the blue spot light source 5, the indigo spot light source 6, and the violet spot light source 7.
S203: Determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
For example, positioning can be performed based on the positions of the N2 spots in the image and the positions on the target object of the light sources corresponding to the N2 spots, determining the pose of the target object. For example, 3 spots and the 3 light sources corresponding to them can be selected from the N2 spots for positioning. The pose of the target object may refer to a 6DoF pose.
In this way, compared with the prior art, which requires multiple positioning calculations to achieve positioning, this application needs only a single positioning calculation to determine the pose of the target object, saving computation time and improving positioning efficiency. When the target object is a game controller in a VR scene, the response time to user actions can be reduced, improving the user's VR experience; when the target object is an object in a special-effects production scene, the time needed to produce special-effects videos can be saved.
The process of determining the N2 light sources corresponding to the N2 spots is described in detail below.
Fig. 3 is a schematic diagram of an exemplary positioning process.
S301: Obtain an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
For example, S301 is similar to S201 above and is not repeated here.
S302: Perform blob detection on the image to determine the N2 spots.
For example, blob detection can be performed on the image captured of the target object to detect regions whose gray values are larger or smaller than those of their surrounding pixels; the detected regions are determined as the spots.
For example, blob detection can be performed using derivative-based differential methods (such as the LoG (Laplacian of Gaussian) algorithm, the DoG (Difference of Gaussians) algorithm, and so on); such methods are called differential detectors: a filter kernel is convolved with the image to find regions that share the kernel's characteristics, that is, to find the blobs.
For example, blob detection can also be based on a watershed algorithm using local extrema.
It should be understood that other blob detection algorithms can also be used; this application does not limit the algorithm used for blob detection. A minimal sketch of such a detection step is given below.
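As an illustrative sketch of S302 only (not the specific detector mandated by this application), the following Python snippet uses OpenCV's SimpleBlobDetector to find bright regions in a captured frame; the area bounds and the bright-blob setting are assumptions that would be tuned for the actual camera and LEDs.

```python
import cv2

def detect_spots(image_bgr):
    """Detect bright blobs (candidate light-source spots) in a captured image."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255          # look for blobs brighter than their surroundings
    params.filterByArea = True
    params.minArea = 5              # assumed bounds; tune for the actual camera/LEDs
    params.maxArea = 500
    params.filterByCircularity = False
    params.filterByConvexity = False
    params.filterByInertia = False

    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(gray)
    # Each keypoint carries an approximate spot center (pt) and diameter (size).
    return [(kp.pt, kp.size) for kp in keypoints]
```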
S303: Determine the color information of the N2 spots.
S304: Look up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots.
For example, when arranging the light sources on the target object, the preset relationship can be established from the N1 light sources and the color information of the N1 light sources. For example, the color information may include a color or an RGB value.
In one possible approach, the preset relationship can be established from the light source identifiers and the colors of the N1 light sources, where a light source identifier uniquely identifies one light source.
Suppose N1 is 14 and the identifiers of the 14 light sources are L1, L2, ..., L14; the colors of the 14 light sources are, respectively: red, orange, yellow, green, blue, indigo, violet, gray, pink, brown, khaki, off-white, flesh, and dark green. The established preset relationship can then be as shown in Table 1:
Table 1
Light source identifier | Color
L1  | red
L2  | orange
L3  | yellow
L4  | green
L5  | blue
L6  | indigo
L7  | violet
L8  | gray
L9  | pink
L10 | brown
L11 | khaki
L12 | off-white
L13 | flesh
L14 | dark green
In another possible approach, the preset relationship can be established from the light source identifiers and the RGB values of the N1 light sources.
Suppose N1 is 14 and the identifiers of the 14 light sources are L1, L2, ..., L14; the RGB values of the 14 light sources are, respectively: (R255,G0,B0), (R255,G128,B0), (R255,G255,B0), (R0,G255,B0), (R0,G0,B255), (R0,G255,B255), (R255,G0,B255), (R96,G96,B96), (R255,G192,B203), (R128,G64,B0), (R195,G176,B145), (R248,G246,B231), (R249,G236,B228), and (R0,G87,B55). The established preset relationship can then be as shown in Table 2:
Table 2
Light source identifier | RGB value
L1  | (R255,G0,B0)
L2  | (R255,G128,B0)
L3  | (R255,G255,B0)
L4  | (R0,G255,B0)
L5  | (R0,G0,B255)
L6  | (R0,G255,B255)
L7  | (R255,G0,B255)
L8  | (R96,G96,B96)
L9  | (R255,G192,B203)
L10 | (R128,G64,B0)
L11 | (R195,G176,B145)
L12 | (R248,G246,B231)
L13 | (R249,G236,B228)
L14 | (R0,G87,B55)
For example, if the preset relationship is established from the light source identifiers and the colors of the N1 light sources, then after the N2 spots contained in the image are determined, the RGB value of each of the N2 spots can be determined. The color of each spot is then determined from its RGB value, and that color is used as the spot's color information. For example, for each spot, the spot's RGB value can be compared against the RGB values corresponding to various preset colors (including the colors of the N1 light sources) to determine the spot's color information.
For example, suppose a preset color has RGB value (R1,G1,B1) and a spot has RGB value (R2,G2,B2); the distance diff between the two RGB values can be computed according to the following formulas:
R3=(R1-R2)/256
G3=(G1-G2)/256
B3=(B1-B2)/256
diff=sqrt(R3*R3+G3*G3+B3*B3)
The smaller the distance diff, the more similar the two colors; the larger the distance diff, the less similar the two colors.
In this way, the distances between the spot's RGB value and the RGB values of the various preset colors can be obtained. The preset color whose RGB value has the smallest distance diff to the spot's RGB value can be determined, and that preset color is used as the spot's color information. Then, for each spot, the preset relationship (such as Table 1) can be looked up according to the spot's color information to determine the light source of the same color as the spot. In this way, the light sources corresponding to the N2 spots can be determined.
For example, if the preset relationship is established from the light source identifiers and the RGB values of the N1 light sources, then after the N2 spots contained in the image are determined, the RGB value of each of the N2 spots can be determined and used directly as the spot's color information. Then, for each spot, the preset relationship (such as Table 2) can be looked up according to the spot's color information: the distance between the spot's RGB value and each light source's RGB value is computed as above, and the light source with the smallest distance diff is determined as the light source corresponding to the spot. A minimal sketch of this matching step follows.
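The following Python sketch illustrates the Table 2 style of matching in S303/S304, assuming the preset relationship is stored as a plain dictionary from light-source identifier to RGB value; the dictionary contents and the sample spot value are illustrative assumptions, not values prescribed by this application.

```python
import math

# Assumed preset relationship (Table 2 style): identifier -> (R, G, B).
PRESET_RELATION = {
    "L1": (255, 0, 0),   "L2": (255, 128, 0), "L3": (255, 255, 0),
    "L4": (0, 255, 0),   "L5": (0, 0, 255),   "L6": (0, 255, 255),
    "L7": (255, 0, 255),
}

def rgb_distance(c1, c2):
    """Normalized Euclidean distance between two RGB triples (the diff above)."""
    r3 = (c1[0] - c2[0]) / 256
    g3 = (c1[1] - c2[1]) / 256
    b3 = (c1[2] - c2[2]) / 256
    return math.sqrt(r3 * r3 + g3 * g3 + b3 * b3)

def match_spot_to_source(spot_rgb):
    """Return the identifier of the light source whose RGB is closest to the spot's."""
    return min(PRESET_RELATION,
               key=lambda lid: rgb_distance(PRESET_RELATION[lid], spot_rgb))

# Usage: a greenish spot should map to L4.
print(match_spot_to_source((10, 240, 12)))  # -> "L4"
```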
S305: Determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
For example, S305 can refer to the description of S203 above and is not repeated here.
The process of determining the pose of the target object is described in detail below.
Fig. 4a is a schematic diagram of an exemplary positioning process.
S401: Obtain an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1.
S402: Perform blob detection on the image to determine the N2 spots.
S403: Determine the color information of the N2 spots.
S404: Look up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots.
For example, S401 to S404 can refer to the descriptions of S301 to S304 above and are not repeated here.
S405: Determine the center-point pixel coordinates corresponding to each of the N2 spots.
In one possible approach, the gray-level centroid method can be used to compute the center-point pixel coordinates of each of the N2 spots. The center-point pixel coordinates (x0, y0) of a spot can be computed according to the following formulas:
x0 = ( Σ_{i=1..m} Σ_{j=1..n} x_i * f_ij ) / ( Σ_{i=1..m} Σ_{j=1..n} f_ij )
y0 = ( Σ_{i=1..m} Σ_{j=1..n} y_j * f_ij ) / ( Σ_{i=1..m} Σ_{j=1..n} f_ij )
where x_i is the coordinate of the spot in row i, y_j is the coordinate of the spot in column j, and f_ij is the pixel value of the spot at row i, column j; pixels whose gray value is below the threshold are excluded from the sums. T is a gray-value threshold that can be set as required, which is not limited in this application. m is the total number of rows in the region corresponding to the spot, and n is the total number of columns.
It should be understood that the center-point pixel coordinates corresponding to the N2 spots can also be determined in other ways, which is not limited in this application. A minimal sketch of the gray-level centroid computation is shown below.
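As an illustrative sketch of S405, assuming the spot is available as a grayscale NumPy sub-image (its bounding region) and using the threshold T defined above; the sample patch and threshold value are assumptions.

```python
import numpy as np

def gray_centroid(region, t):
    """Gray-level centroid (x0, y0) of a spot region; pixels below threshold t are ignored."""
    f = region.astype(np.float64)
    f[f < t] = 0.0                      # apply the gray-value threshold T
    rows, cols = np.indices(f.shape)    # row index i and column index j of each pixel
    total = f.sum()
    if total == 0:
        raise ValueError("no pixel above threshold")
    x0 = (rows * f).sum() / total       # row coordinate of the centroid
    y0 = (cols * f).sum() / total       # column coordinate of the centroid
    return x0, y0

# Usage: a bright 3x3 patch centered in a 5x5 region.
patch = np.zeros((5, 5))
patch[1:4, 1:4] = 200
print(gray_centroid(patch, t=50))  # -> (2.0, 2.0)
```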
S406: Determine the pose of the target object based on the center-point pixel coordinates corresponding to the N2 spots and the position arrangement information of the light sources corresponding to the N2 spots.
For example, the PnP (Perspective-n-Point) algorithm, a method that solves the motion between 3D and 2D point pairs, can be used to compute the pose of the target object. Specifically, the P3P (Perspective-3-Point) algorithm can be used, that is, solving the 3D-to-2D point-pair motion from 3 anchor points. Accordingly, 3 spots can be selected from the N2 spots, and the pose of the target object is then computed from the center-point pixel coordinates corresponding to these 3 spots and the position arrangement information, on the target object, of the 3 light sources corresponding to them.
Fig. 4b is a schematic diagram of an exemplary pose solving method. In Fig. 4b, O is the optical center of the camera; A, B, and C are 3 light sources on the target object; the rectangular frame is the image captured of the target object; and a, b, and c are the 3 spots in the image, that is, the projections of A, B, and C onto the imaging plane of the camera. The position arrangement information of A, B, and C refers to their coordinates on the target object, which are coordinates in the world coordinate system rather than the camera coordinate system; the center-point pixel coordinates of a, b, and c are coordinates in the camera coordinate system.
Referring to Fig. 4b, for example, the triangles ΔOab, ΔOAB, ΔObc, ΔOBC, ΔOac, and ΔOAC correspond as follows: ΔOab corresponds to ΔOAB, ΔObc corresponds to ΔOBC, and ΔOac corresponds to ΔOAC.
For triangles ΔOab and ΔOAB, the law of cosines gives:
OA^2 + OB^2 - 2*OA*OB*cos<a,b> = AB^2   (1)
For triangles ΔObc and ΔOBC, the law of cosines gives:
OB^2 + OC^2 - 2*OB*OC*cos<b,c> = BC^2   (2)
For triangles ΔOac and ΔOAC, the law of cosines gives:
OA^2 + OC^2 - 2*OA*OC*cos<a,c> = AC^2   (3)
Dividing formulas (1), (2), and (3) by OC^2 and writing x = OA/OC, y = OB/OC gives:
x^2 + y^2 - 2*x*y*cos<a,b> = AB^2/OC^2   (4)
y^2 + 1 - 2*y*cos<b,c> = BC^2/OC^2   (5)
x^2 + 1 - 2*x*cos<a,c> = AC^2/OC^2   (6)
Writing v = AB^2/OC^2, uv = BC^2/OC^2, wv = AC^2/OC^2 and transforming formulas (4), (5), and (6) gives:
x^2 + y^2 - 2*x*y*cos<a,b> - v = 0   (7)
y^2 + 1 - 2*y*cos<b,c> - uv = 0   (8)
x^2 + 1 - 2*x*cos<a,c> - wv = 0   (9)
For example, v can be isolated on one side of formula (7) and substituted into formulas (8) and (9), which gives:
(1-u)*y^2 - u*x^2 - 2*y*cos<b,c> + 2*u*x*y*cos<a,b> + 1 = 0   (10)
(1-w)*x^2 - w*y^2 - 2*x*cos<a,c> + 2*w*x*y*cos<a,b> + 1 = 0   (11)
Since the center-point pixel coordinates of a, b, and c were determined above, the three cosines cos<a,b>, cos<b,c>, and cos<a,c> can be determined. Meanwhile, u = BC^2/AB^2 and w = AC^2/AB^2 can be computed from the position arrangement information of A, B, and C on the target object (that is, their coordinates in the world coordinate system); transforming these coordinates into the camera coordinate system does not change the ratios, so u and w are also known quantities. The x and y in these equations are unknown and change as the camera moves. The P3P problem therefore ultimately reduces to a system of two quadratic (polynomial) equations in x and y, which can have at most four solutions.
For example, the other (N2 - 3) of the N2 spots can be used as verification points; these (N2 - 3) spots form verification point pairs with their corresponding (N2 - 3) light sources, and the verification point pairs can be used to compute the most probable solution, thereby obtaining the 3D coordinates of A, B, and C in the camera coordinate system, that is, the pose of the target object. A sketch of this solve-and-verify step is given below.
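The following Python sketch illustrates S406 under the assumption that OpenCV's P3P solver is used in place of a hand-rolled solution of equations (10) and (11); the camera intrinsics and the four or more spot/light-source pairs are inputs assumed for illustration, not the exact solver of this application.

```python
import cv2
import numpy as np

def solve_pose(obj_pts, img_pts, camera_matrix, dist_coeffs=None):
    """Pose from spot/light-source pairs: P3P on the first 3 pairs, then
    disambiguation of the candidate solutions with the remaining pairs."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    obj_pts = np.asarray(obj_pts, dtype=np.float64)  # light-source positions (world frame)
    img_pts = np.asarray(img_pts, dtype=np.float64)  # spot center-point pixel coordinates

    # P3P on 3 point pairs can yield up to 4 candidate (rvec, tvec) solutions.
    n_sol, rvecs, tvecs = cv2.solveP3P(obj_pts[:3], img_pts[:3],
                                       camera_matrix, dist_coeffs,
                                       flags=cv2.SOLVEPNP_P3P)
    if n_sol == 0:
        return None
    if len(obj_pts) < 4:
        return rvecs[0], tvecs[0]  # no verification points available

    # Use the remaining (N2 - 3) pairs as verification point pairs: keep the
    # candidate whose reprojection of the verification light sources best
    # matches their spots in the image.
    best, best_err = None, float("inf")
    for rvec, tvec in zip(rvecs, tvecs):
        proj, _ = cv2.projectPoints(obj_pts[3:], rvec, tvec, camera_matrix, dist_coeffs)
        err = np.linalg.norm(proj.reshape(-1, 2) - img_pts[3:], axis=1).mean()
        if err < best_err:
            best, best_err = (rvec, tvec), err
    return best
```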
Fig. 5 is a schematic diagram of an exemplary positioning system. The positioning system may include: a first device 501 and a game controller 502 connected to the first device 501; the first device 501 is a virtual reality (VR) device or an augmented reality (AR) device, and the first device 501 includes an image acquisition module 503; N1 light sources 504 with different spectra are arranged on the game controller, where N1 is an integer greater than 2.
For example, the first device 501 can be configured to: obtain an image captured of the game controller 502 by the image acquisition module 503, where the image contains N2 spots, the N2 spots correspond to N2 light sources, N2 is an integer greater than 2, and N2 is less than or equal to N1; determine, from the N1 light sources and based on the color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
For example, the first device 501 can be configured to: perform blob detection on the image to determine the N2 spots; determine the color information of the N2 spots; and look up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots; where one spot corresponds to one light source, and the preset relationship includes the relationship between the N1 light sources and the color information of the N1 light sources.
For example, the first device 501 can be configured to: determine the center-point pixel coordinates corresponding to each of the N2 spots; and determine the pose of the target object based on the position arrangement information of the light sources corresponding to the N2 spots and the center-point pixel coordinates corresponding to the N2 spots.
For example, N1 LED point light sources with different spectra are arranged on the game controller 502.
For example, the image acquisition module 503 includes color filters corresponding to the spectra of the N1 light sources.
For example, the VR device is an all-in-one VR headset.
For example, the AR device is an all-in-one AR headset.
Fig. 6 is a schematic diagram of an exemplary positioning system. The positioning system may include a first device 601, a second device 602 connected to the first device 601, a vision module 603 connected to the second device 602, and a game controller 604 connected to the second device 602; the first device 601 is a terminal device, and the second device 602 is a virtual reality (VR) device or an augmented reality (AR) device; the vision module 603 includes an image acquisition module 605, and N1 light sources 606 with different spectra are arranged on the game controller 604, where N1 is an integer greater than 2.
The vision module 603 is configured to invoke the image acquisition module 605 to capture an image of the game controller 604 and send the image to the second device, where the image contains N2 spots, the N2 spots correspond to N2 light sources, N2 is an integer greater than 2, and N2 is less than or equal to N1.
The second device 602 is configured to forward the image to the first device 601.
The first device 601 is configured to: receive the image; determine, from the N1 light sources and based on the color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
For example, the first device 601 can be configured to: perform blob detection on the image to determine the N2 spots; determine the color information of the N2 spots; and look up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots; where one spot corresponds to one light source, and the preset relationship includes the relationship between the N1 light sources and the color information of the N1 light sources.
For example, the first device 601 can be configured to: determine the center-point pixel coordinates corresponding to each of the N2 spots; and determine the pose of the target object based on the position arrangement information of the light sources corresponding to the N2 spots and the center-point pixel coordinates corresponding to the N2 spots.
For example, the image acquisition module 605 includes color filters corresponding to the spectra of the N1 light sources.
For example, the VR device is VR glasses.
For example, the AR device is AR glasses.
For example, N1 LED point light sources with different spectra are arranged on the game controller 604.
Fig. 7 is a schematic diagram of an exemplary positioning apparatus. The positioning apparatus may include:
an image obtaining module 701, configured to obtain an image captured of a target object to be positioned, where N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1;
a light source determining module 702, configured to determine, from the N1 light sources and based on the color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and
a positioning module 703, configured to determine the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
For example, the light source determining module 702 is specifically configured to: perform blob detection on the image to determine the N2 spots; determine the color information of the N2 spots; and look up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots; where one spot corresponds to one light source, and the preset relationship includes the relationship between the N1 light sources and the color information of the N1 light sources.
For example, the positioning module 703 is specifically configured to: determine the center-point pixel coordinates corresponding to each of the N2 spots; and determine the pose of the target object based on the center-point pixel coordinates corresponding to the N2 spots and the position arrangement information of the light sources corresponding to the N2 spots.
For example, the apparatus can be applied to virtual reality (VR) game scenarios or augmented reality (AR) game scenarios, and the target object includes a game controller.
For example, the light sources are point light sources.
In one example, Fig. 8 shows a schematic block diagram of an apparatus 800 according to an embodiment of this application. The apparatus 800 may include: a processor 801 and transceiver/transceiver pins 802, and optionally a memory 803.
The components of the apparatus 800 are coupled together through a bus 804, where the bus 804 includes a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity, however, the various buses are all referred to as the bus 804 in the figure.
Optionally, the memory 803 can be used to store the instructions in the foregoing method embodiments. The processor 801 can be used to execute the instructions in the memory 803, control the receive pin to receive signals, and control the transmit pin to send signals.
The apparatus 800 may be the electronic device in the foregoing method embodiments or a chip of the electronic device.
All related content of the steps involved in the foregoing method embodiments can be cited in the functional descriptions of the corresponding functional modules, and is not repeated here.
This embodiment further provides a computer-readable storage medium. The computer-readable storage medium stores computer instructions; when the computer instructions run on an electronic device, the electronic device is caused to perform the foregoing related method steps to implement the positioning method in the foregoing embodiments.
This embodiment further provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the foregoing related steps to implement the positioning method in the foregoing embodiments.
In addition, an embodiment of this application further provides an apparatus. The apparatus may specifically be a chip, a component, or a module, and may include a processor and a memory that are connected. The memory is used to store computer-executable instructions; when the apparatus runs, the processor can execute the computer-executable instructions stored in the memory, so that the chip performs the positioning method in the foregoing method embodiments.
The electronic device, computer-readable storage medium, computer program product, or chip provided in this embodiment is used to perform the corresponding method provided above; therefore, for the beneficial effects it can achieve, reference may be made to the beneficial effects of the corresponding method provided above, which are not repeated here.
From the description of the foregoing implementations, a person skilled in the art can understand that, for convenience and brevity of description, only the division of the foregoing functional modules is used as an example for illustration. In practical applications, the foregoing functions can be allocated to different functional modules as required; that is, the internal structure of the apparatus is divided into different functional modules to complete all or some of the functions described above.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative. For example, the division into modules or units is merely a logical function division; there may be other division manners in actual implementation. For example, multiple units or components may be combined or integrated into another apparatus, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and a component shown as a unit may be one physical unit or multiple physical units; that is, it may be located in one place or distributed in multiple different places. Some or all of the units may be selected as required to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
Any content of the embodiments of this application, and any content of the same embodiment, can be freely combined. Any combination of the foregoing content falls within the scope of this application.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application essentially, or the part contributing to the prior art, or all or some of the technical solutions, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor (processor) to perform all or some of the steps of the methods in the embodiments of this application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (read only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disc.
The steps of the methods or algorithms described with reference to the disclosure of the embodiments of this application may be implemented in hardware, or may be implemented by a processor executing software instructions. The software instructions may consist of corresponding software modules, and the software modules may be stored in a random access memory (Random Access Memory, RAM), a flash memory, a read-only memory (Read Only Memory, ROM), an erasable programmable read-only memory (Erasable Programmable ROM, EPROM), an electrically erasable programmable read-only memory (Electrically EPROM, EEPROM), a register, a hard disk, a removable hard disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium well known in the art. An exemplary storage medium is coupled to the processor, so that the processor can read information from the storage medium and write information to the storage medium. Certainly, the storage medium may also be a component of the processor. The processor and the storage medium may be located in an ASIC.
A person skilled in the art should be aware that, in the foregoing one or more examples, the functions described in the embodiments of this application may be implemented by hardware, software, firmware, or any combination thereof. When implemented by software, these functions may be stored in a computer-readable medium or transmitted as one or more instructions or code on a computer-readable medium. The computer-readable medium includes a computer-readable storage medium and a communication medium, where the communication medium includes any medium that facilitates transmission of a computer program from one place to another. The storage medium may be any available medium accessible to a general-purpose or special-purpose computer.
The embodiments of this application have been described above with reference to the accompanying drawings, but this application is not limited to the foregoing specific implementations. The foregoing specific implementations are merely illustrative rather than restrictive. Inspired by this application, a person of ordinary skill in the art can devise many other forms without departing from the purpose of this application and the scope protected by the claims, all of which fall within the protection of this application.

Claims (17)

  1. A positioning method, wherein the method comprises:
    obtaining an image captured of a target object to be positioned, wherein N1 light sources with different spectra are arranged on the target object, the image contains N2 spots, the N2 spots correspond to N2 light sources, N1 and N2 are both integers greater than 2, and N2 is less than or equal to N1;
    determining, from the N1 light sources and based on color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and
    determining a pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  2. The method according to claim 1, wherein the determining, from the N1 light sources and based on the color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots comprises:
    performing blob detection on the image to determine the N2 spots;
    determining the color information of the N2 spots; and
    looking up a preset relationship based on the color information of the N2 spots, to determine from the N1 light sources the light source corresponding to each of the N2 spots;
    wherein one spot corresponds to one light source, and the preset relationship comprises a relationship between the N1 light sources and the color information of the N1 light sources.
  3. The method according to claim 1 or 2, wherein the determining the pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots comprises:
    determining center-point pixel coordinates corresponding to each of the N2 spots; and
    determining the pose of the target object based on the center-point pixel coordinates corresponding to the N2 spots and position arrangement information of the light sources corresponding to the N2 spots.
  4. The method according to any one of claims 1 to 3, wherein
    the method is applied to a virtual reality (VR) game scenario or an augmented reality (AR) game scenario, and the target object comprises a game controller.
  5. The method according to any one of claims 1 to 4, wherein
    the light sources are point light sources.
  6. A positioning system, wherein the system comprises: a first device and a game controller connected to the first device; the first device is a virtual reality (VR) device or an augmented reality (AR) device, and the first device comprises an image acquisition module; N1 light sources with different spectra are arranged on the game controller, and N1 is an integer greater than 2;
    the first device is configured to: obtain an image captured of the game controller by the image acquisition module, wherein the image contains N2 spots, the N2 spots correspond to N2 light sources, N2 is an integer greater than 2, and N2 is less than or equal to N1; determine, from the N1 light sources and based on color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and determine a pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  7. The system according to claim 6, wherein
    the image acquisition module comprises color filters corresponding to the spectra of the N1 light sources.
  8. The system according to claim 6 or 7, wherein
    the VR device is an all-in-one VR headset; and
    the AR device is an all-in-one AR headset.
  9. The system according to any one of claims 6 to 8, wherein N1 light-emitting diode (LED) point light sources with different spectra are arranged on the game controller.
  10. A positioning system, wherein the system comprises: a first device, a second device connected to the first device, and a vision module and a game controller that are connected to the second device; the first device is a terminal device, and the second device is a virtual reality (VR) device or an augmented reality (AR) device; the vision module comprises an image acquisition module; N1 light sources with different spectra are arranged on the game controller, and N1 is an integer greater than 2;
    the vision module is configured to invoke the image acquisition module to capture an image of the game controller and send the image to the second device, wherein the image contains N2 spots, the N2 spots correspond to N2 light sources, N2 is an integer greater than 2, and N2 is less than or equal to N1;
    the second device is configured to forward the image to the first device; and
    the first device is configured to: receive the image; determine, from the N1 light sources and based on color information of the N2 spots contained in the image, the light source corresponding to each of the N2 spots; and determine a pose of the target object based on the N2 spots and the light sources corresponding to the N2 spots.
  11. The system according to claim 10, wherein
    the image acquisition module comprises color filters corresponding to the spectra of the N1 light sources.
  12. The system according to claim 10 or 11, wherein
    the VR device is VR glasses; and
    the AR device is AR glasses.
  13. The system according to any one of claims 10 to 12, wherein N1 light-emitting diode (LED) point light sources with different spectra are arranged on the game controller.
  14. An electronic device, comprising:
    a memory and a processor, wherein the memory is coupled to the processor; and
    the memory stores program instructions that, when executed by the processor, cause the electronic device to perform the positioning method according to any one of claims 1 to 5.
  15. A chip, comprising one or more interface circuits and one or more processors, wherein the interface circuits are configured to receive signals from a memory of an electronic device and send the signals to the processors, the signals comprising computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device is caused to perform the positioning method according to any one of claims 1 to 5.
  16. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program that, when run on a computer or a processor, causes the computer or the processor to perform the positioning method according to any one of claims 1 to 5.
  17. A computer program product, wherein the computer program product comprises a software program that, when executed by a computer or a processor, causes the steps of the method according to any one of claims 1 to 5 to be performed.
PCT/CN2023/105343 2022-07-21 2023-06-30 Positioning method and system, and electronic device WO2024017045A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210858901.9 2022-07-21
CN202210858901.9A CN117462939A (zh) Positioning method and system, and electronic device

Publications (1)

Publication Number Publication Date
WO2024017045A1 true WO2024017045A1 (zh) 2024-01-25

Family

ID=89617004

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105343 WO2024017045A1 (zh) 2022-07-21 2023-06-30 Positioning method and system, and electronic device

Country Status (2)

Country Link
CN (1) CN117462939A (zh)
WO (1) WO2024017045A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030199324A1 (en) * 2002-04-23 2003-10-23 Xiaoling Wang Apparatus and a method for more realistic shooting video games on computers or similar devices using visible or invisible light
US20030199325A1 (en) * 2002-04-23 2003-10-23 Xiaoling Wang Apparatus and a method for more realistic shooting video games on computers or similar devices using visible or invisible light and an input computing device
CN105844199A (zh) * 2016-03-30 2016-08-10 乐视控股(北京)有限公司 Method and device for determining aiming position of a game gun on a display screen
WO2022100288A1 (zh) * 2020-11-12 2022-05-19 海信视像科技股份有限公司 Display device, handle, and calibration method for virtual target positioning and tracking

Also Published As

Publication number Publication date
CN117462939A (zh) 2024-01-30

Similar Documents

Publication Publication Date Title
TW202201178A (zh) Low-power visual tracking system
KR102125534B1 (ko) Virtual reality head-mounted devices with reduced number of cameras, and methods of operating the same
US20220358663A1 (en) Localization and Tracking Method and Platform, Head-Mounted Display System, and Computer-Readable Storage Medium
EP3786894A1 (en) Method, device and apparatus for repositioning in camera orientation tracking process, and storage medium
CN108604379A (zh) System and method for determining a region in an image
CN107818290B (zh) Heuristic finger detection method based on depth map
CN112527102A (zh) Head-mounted all-in-one system and 6DoF tracking method and apparatus thereof
US11798177B2 (en) Hand tracking method, device and system
US11493997B2 (en) Program, recognition apparatus, and recognition method
WO2017143745A1 (zh) Method and apparatus for determining movement information of an object to be measured
US11270443B2 (en) Resilient dynamic projection mapping system and methods
WO2021101097A1 (en) Multi-task fusion neural network architecture
WO2021007859A1 (zh) Human body pose estimation method and apparatus
CN111427452B (zh) Tracking method of controller and VR system
KR20230024901A (ko) Low-power visual tracking systems
WO2024017045A1 (zh) Positioning method and system, and electronic device
US11402932B2 (en) Information processing device, information processing method, and information processing system
US11436818B2 (en) Interactive method and interactive system
CN110059621A (zh) Control method, apparatus and device for the color of a controller light ball
US11900058B2 (en) Ring motion capture and message composition system
CN109126136B (zh) Method, apparatus, device and storage medium for generating a three-dimensional virtual pet
WO2022148224A1 (zh) Controller calibration method, electronic device, chip, and readable storage medium
US20230063078A1 (en) System on a chip with simultaneous usb communications
WO2021082736A1 (zh) Method and apparatus for obtaining pose information and determining object symmetry, and storage medium
WO2021035703A1 (zh) Tracking method and movable platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842110

Country of ref document: EP

Kind code of ref document: A1