WO2019041147A1 - Light spot identification method, apparatus and system - Google Patents

Light spot identification method, apparatus and system

Info

Publication number
WO2019041147A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
spot
thres
threshold
Prior art date
Application number
PCT/CN2017/099544
Other languages
English (en)
French (fr)
Inventor
胡永涛
戴景文
贺杰
Original Assignee
广东虚拟现实科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广东虚拟现实科技有限公司
Priority to CN201780007690.5A (published as CN108701365B)
Priority to US16/314,444 (published as US10922846B2)
Priority to PCT/CN2017/099544 (published as WO2019041147A1)
Publication of WO2019041147A1

Classifications

    • G06T7/90: Determination of colour characteristics
    • G06T5/20: Image enhancement or restoration using local operators
    • G06T7/11: Region-based segmentation
    • G06T7/136: Segmentation involving thresholding
    • G06T7/194: Segmentation involving foreground-background segmentation
    • H04N1/60: Colour correction or control
    • H04N1/6027: Correction or control of colour gradation or colour contrast
    • H04N1/6086: Colour correction or control controlled by scene illuminant (conditions at the time of picture capture, e.g. flash, optical filter used, daylight, artificial lighting, white point measurement, colour temperature)
    • G06T2207/10016: Video; image sequence
    • G06T2207/10024: Color image
    • G06T2207/30204: Marker

Definitions

  • the present invention relates to the field of computer technologies, and in particular, to a method, an apparatus, and a system for identifying a light spot.
  • Head-mounted devices with virtual display effects are an important application direction in the fields of virtual reality (VR), augmented reality (AR), and mixed reality (MR).
  • the rapid development of the VR/AR/MR field has created a huge demand for interactive control technology.
  • as an important hardware device for interactive control, the controller provides strong support for interaction.
  • the user can realize human-computer interaction by operating the controller's controls (buttons, triggers, trackpads, etc.).
  • tracking and positioning of the controller is currently performed mainly by optical methods, such as infrared tracking or adding a strobing light spot.
  • however, infrared tracking requires special equipment, and strobe-based identification requires a complete strobe period to identify the spot, with the strobe frequency precisely controlled.
  • An object of the embodiments of the present invention is to provide a method, an apparatus, and a system for identifying a light spot to solve the above problems.
  • a first aspect of the embodiments of the present invention provides a method for identifying a light spot, the method comprising: obtaining a first image corresponding to a light spot image, wherein the first image is an image of the light spot image in a first color space; converting the first image into a second image, the second image being an image of the light spot image in a second color space; and identifying a spot of the target color in the second image according to a preset color recognition condition of the second color space.
  • the color recognition condition includes a plurality of sets of threshold intervals, each set corresponding to a given color and including at least one color parameter of the second color space together with the threshold interval corresponding to that given color.
  • identifying a spot of the target color in the second image according to the preset color recognition condition of the second color space comprises: comparing the color parameters of the second image with the plurality of sets of threshold intervals, and determining the spot of the target color based on the comparison result.
  • identifying a light spot of the target color in the second image according to the preset color recognition condition of the second color space comprises: analyzing the distribution of each color parameter of the second image in the second color space, and adjusting, according to the distribution, the threshold interval corresponding to the color parameter and the given color.
  • obtaining the first image corresponding to the spot image comprises: obtaining a raw image of the spot image acquired by the image sensor, and processing the raw image to obtain an image of the spot image in the first color space as the first image.
  • before identifying the light spot of the target color in the second image according to the preset color recognition condition of the second color space, the method further comprises: filtering out of the second image noise points whose size differs from the spot size by more than a difference threshold.
  • the first color space is an RGB space and the second color space is an HSV space.
  • each set of threshold intervals includes at least a threshold interval corresponding to the H parameter and the given color, namely (H_center - h_thres) < H < (H_center + h_thres), where H is the H parameter, H_center is the preset H value corresponding to the given color, and h_thres is the tolerance corresponding to the given color.
  • the method of determining the threshold interval corresponding to the H parameter and a given color comprises: converting different colors to the HSV space; normalizing the H parameter; determining the correspondence between a plurality of given colors and the normalized H parameter; and taking the corresponding threshold interval as the threshold interval for the H parameter and the given color.
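The correspondence between given colors and the normalized H parameter can be sketched as follows. This is an illustrative sketch using Python's standard colorsys module; the color list and RGB values are assumptions for illustration, not values from the patent.

```python
import colorsys

# Hypothetical given colors, RGB in [0, 1]; magenta stands in for a
# mixed-color (red + blue) lamp bead such as purple/pink.
GIVEN_COLORS = {
    "red":     (1.0, 0.0, 0.0),
    "green":   (0.0, 1.0, 0.0),
    "blue":    (0.0, 0.0, 1.0),
    "magenta": (1.0, 0.0, 1.0),
}

def h_centers(colors):
    """Map each given color to its normalized H parameter in [0, 1)."""
    return {name: colorsys.rgb_to_hsv(*rgb)[0] for name, rgb in colors.items()}

centers = h_centers(GIVEN_COLORS)
# red -> 0.0, green -> 1/3, blue -> 2/3, magenta -> 5/6
```

A threshold interval for each color would then be placed around its H_center with a per-color tolerance h_thres.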
  • the h_thres corresponding to a solid color is less than the h_thres corresponding to a mixed color.
  • each set of threshold intervals further includes a threshold interval corresponding to the S parameter and the given color, namely s_min_thres < S < s_max_thres, where S is the S parameter, s_min_thres is the lower saturation threshold preset for the given color, and s_max_thres is the upper saturation threshold preset for the given color.
  • each set of threshold intervals further includes a threshold interval corresponding to the V parameter and the given color, namely v_min_thres < V < v_max_thres, where V is the V parameter, v_min_thres is the lower brightness threshold preset for the given color, and v_max_thres is the upper brightness threshold preset for the given color.
  • v_min_thres and v_max_thres are set according to the overall or local luminance of the second image.
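One way this could work is to place the brightness thresholds relative to the scene's average luminance; the following is a minimal sketch, in which the band width and the assumption that spots are brighter than the scene average are mine, not the patent's.

```python
from statistics import mean

def brightness_thresholds(v_values, band=0.25):
    """Set v_min_thres / v_max_thres from overall image brightness.

    v_values: V (brightness) samples of the second image, each in [0, 1].
    band: hypothetical offset; a point-source spot is assumed brighter
    than the scene average, so the lower threshold sits above the mean.
    """
    avg = mean(v_values)
    v_min_thres = min(1.0, avg + band)   # spots are brighter than average
    v_max_thres = 1.0                    # allow fully bright pixels
    return v_min_thres, v_max_thres

vmin, vmax = brightness_thresholds([0.1, 0.2, 0.3, 0.2])  # a dim scene
```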
  • a second aspect of the embodiments of the present invention provides a light spot recognition apparatus, including: an image acquisition module, configured to obtain a first image corresponding to a light spot image, wherein the first image is an image of the light spot image in a first color space; an image conversion module, configured to convert the first image into a second image, the second image being an image of the light spot image in a second color space; and a spot recognition module, configured to identify the spot of the target color in the second image according to a preset color recognition condition of the second color space.
  • a third aspect of the embodiments of the present invention provides an image processing apparatus including a processor and a memory, wherein the processor is connected to the memory, the memory is used to store instructions, and the processor is configured to execute the instructions; when the processor executes the instructions, the aforementioned method may be performed to process the image and identify a spot of the target color.
  • a fourth aspect of the embodiments of the present invention provides an image processing system including an image sensor and an image processing device, the image processing device including a processor and a memory, the processor connected to the memory and the image sensor; the memory is configured to store instructions, and the processor is configured to execute the instructions; the foregoing method may be performed according to the instructions to process images captured by the image sensor and identify the spot of the target color.
  • a fifth aspect of the embodiments of the present invention provides a terminal, including an image sensor, a memory, and a processor coupled to the image sensor and the memory, respectively, where the memory is used to store instructions and images collected by the image sensor.
  • the processor is configured to execute the instructions, and when the processor executes the instructions, the foregoing method is performed according to the instructions to process the image collected by the image sensor and identify a light spot of the target color.
  • a sixth aspect of the embodiments of the present invention provides a tracking and positioning system, including a terminal and a controller provided with a point light source, wherein the terminal is provided with an image sensor, and the terminal is the aforementioned terminal.
  • a seventh aspect of the embodiments of the present invention provides a tracking and positioning system, including an image sensor, a controller provided with a point light source, and an image processing device, wherein the image processing device is coupled to the image sensor and the controller respectively, and the image processing device is the aforementioned image processing device.
  • An eighth aspect of the embodiments of the present invention provides a readable storage medium storing program code for spot recognition, the program code including instructions for performing the aforementioned method.
  • with the color recognition condition of the second color space, the light spot of the target color can be identified in the second image using only the image information of the current frame, so the recognition result can be output without delay;
  • the required equipment is simple: point light sources of different colors are set on the controllers, and only the color recognition condition needs to be modified for different target colors, with no change to the hardware; multiple controllers can also be used together.
  • FIG. 1 is a schematic structural diagram of a positioning and tracking system according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for identifying a light spot according to an embodiment of the present invention
  • FIG. 4 is an exemplary schematic diagram of a raw view of a spot image in an embodiment of the present invention.
  • FIG. 5 is an exemplary schematic diagram of a first image in an embodiment of the present invention.
  • FIG. 6 is a schematic diagram showing a correspondence relationship between a normalized H parameter and a color in an embodiment of the present invention
  • FIG. 7 is an exemplary schematic diagram of a threshold interval corresponding to an H parameter and two colors in an embodiment of the present invention.
  • FIG. 8 is an exemplary schematic diagram of threshold intervals corresponding to H parameters and seven colors in an embodiment of the present invention.
  • Figure 9 is the H component of the first image shown in Figure 5 in the HSV space;
  • Figure 10 is the V component of the first image shown in Figure 5 in the HSV space;
  • Figure 11 is the S component of the first image shown in Figure 5 in the HSV space;
  • Figure 12 is the blue light spot identified in Figure 5;
  • Figure 13 is the pink light spot identified in Figure 5;
  • FIG. 14 is a structural block diagram of a light spot recognition apparatus according to an embodiment of the present invention.
  • FIG. 15 is a structural block diagram of a terminal according to an embodiment of the present invention.
  • "horizontal" simply means that the direction is closer to horizontal than to "vertical"; it does not require the structure to be completely horizontal, and the structure may be slightly inclined.
  • FIG. 1 is an exemplary block diagram of a tracking and positioning system according to an embodiment of the present invention.
  • tracking and positioning system 100 can include controller 120, image sensor 140, and image processing device 160.
  • the controller 120 can be a gamepad with a point source, a pointing stick, a somatosensory mouse, and the like.
  • the number of controllers can be one or more.
  • the number of point sources on each controller can be one or more.
  • the point source can be a light source, such as an LED, that emits light from a point into the surrounding space.
  • the point light source may be an LED lamp formed of a solid-color red, green, or blue lamp bead, or an LED lamp obtained by mixing lamp beads of different colors; for example, a purple or pink LED lamp may be obtained by mixing red and blue lamp beads.
  • the shape of the LED lamp can be many, for example, it can be a sphere, a hemisphere, a triangle, a pentagram, or the like.
  • the image of the controller 120 acquired by the image sensor 140 may include a spot image.
  • the spot image includes a background and a spot distributed over the background, and the spot is an image formed by a point source.
  • the image processing device 160 may track and locate the controller 120 based on the spot image on the controller 120 captured by the image sensor 140.
  • the embodiment of the present invention can directly track and locate the controller based on the spot image, collected by the image sensor, of the point light source on the controller, thereby avoiding the need to precisely control the light source and its strobe frequency.
  • the control requirements of the image processing apparatus can be reduced, and the image sensor does not need to be specially adjusted to match the tracking positioning of the controller.
  • Image sensor 140 may be any image sensing device capable of capturing an image of an object within its field of view for exposure imaging of a light source on controller 120 to obtain an original image.
  • image sensor 140 may not have a fixed location; for example, it may be worn by a user (e.g., as part of a head-mounted display device), as in FIG. 1, where the sensor 140 is disposed on the head-mounted display device as an example.
  • image sensor 140 can be placed in a fixed position, for example, it can be placed on a table or shelf.
  • Image sensor 140 may be configured to capture images of objects within its field of view at different locations.
  • the image sensor 140 may be a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CCD (Charge-coupled Device) sensor, or the like.
  • image sensor 140 can be configured to capture multiple images at different points in time over a period of time. For example, when controller 120 moves within the field of view of image sensor 140, image sensor 140 may capture images of controller 120 at different locations during that period. Image sensor 140 can also record the time at which each image is captured and send the time information together with the image to image processing device 160 for further processing. In an embodiment of the invention, image processing device 160 may be configured to track and locate controller 120 by identifying light spots included in the image.
  • the image processing device 160 first performs image processing on the original image to exclude the influence of spot-like patterns formed by other objects; alternatively, this processing can be performed during spot identification.
  • image sensor 140 can communicate with image processing device 160 and transmit image data to image processing device 160.
  • the image sensor 140 may also receive a command signal from the image processing device 160 that sets parameters for capturing an image.
  • Exemplary parameters for capturing an image may include setting the exposure time, aperture, image resolution/size, field of view (e.g., zooming in and out), and/or color space of the image (e.g., color or black and white), and/or parameters used to perform other known camera functions.
  • Image sensor 140 and controller 120 may be connected via a network connection, bus, or other type of data link (e.g., hardwired, wireless (e.g., Bluetooth(TM)), or other connection known in the art).
  • the image processing device 160 may be an entity with good computing power such as an embedded processor, a digital image processor, a smart phone, a computer, a tablet, a notebook, and the like.
  • An image sensor may or may not be provided on the image processing apparatus.
  • Image processing device 160 may be configured to receive and process data/signals from other components of the system.
  • image processing device 160 may receive and process image data and/or input data from controller 120 from image sensor 140 as disclosed in the present disclosure.
  • Image processing device 160 may also transmit data/signals to other components of the system, and other components may perform certain functions based on data/signals from image processing device 160.
  • image processing device 160 can include a processor 161, a memory 162, and a communication interface 163.
  • Processor 161 may comprise any suitable type of general purpose or special purpose microprocessor, digital signal processor or microcontroller.
  • the processor 161 can be configured as a separate processor module dedicated to locating the tracked object. Alternatively, the processor can be configured as a shared processor module that also performs other functions unrelated to tracking objects.
  • the processor 161 can be configured to receive data and/or signals from various components of the system via, for example, a network. Processor 161 can also process data and/or signals to determine one or more operating conditions in the system. For example, processor 161 can receive an image from image sensor 140 and determine if the image includes an identification pattern, and processor 161 can also determine a landmark point included in the identification pattern. Additionally or alternatively, the processor 161 can determine the size and number of landmarks included in the identification pattern. The processor 161 can also determine the tracking target based on the determined size of the landmark points and/or the determined number of landmark points.
  • Memory 162 can include any suitable type of mass storage for any type of information that the processor may need in order to operate.
  • the memory can be a volatile or nonvolatile, magnetic, semiconductor, optical, erasable, non-erasable, or other type of storage device or tangible (i.e., non-transitory) computer readable medium, including but not limited to ROM, flash memory, dynamic RAM, and static RAM.
  • Memory 162 can be configured to store one or more computer programs implementing the exemplary object tracking and positioning functions disclosed in the present disclosure, which can be executed by processor 161. For example, memory 162 can be configured to store programs that are executable by processor 161.
  • Memory 162 can also be configured to store information and data used by processor 161.
  • memory 162 can be configured to store a lookup table that includes identification patterns and their corresponding parameters. If the identification pattern is known, the processor can determine the identity of the identification pattern by querying the lookup table.
  • Communication interface 163 can be configured to facilitate communication, for example over a network, between the controller and other components of the system.
  • image processing device 160 can receive input data/signals from a controller via a communication interface to control characters in the game.
  • Image processing device 160 may also transmit data/signals to other displays for presenting games (images, video and/or sound signals) via communication interface 163.
  • the network may include or partially include any one or more of various networks or other types of communication connections known to those skilled in the art.
  • the network may include network connections, buses or other types of data links, such as hardwired or other connections known in the art.
  • the network may include the Internet, an intranet, a local area network or other wireless or other hardwired connection, or other connection means (eg, Bluetooth, WiFi, 4G, LTE cellular data network, etc.) through which components of the system communicate.
  • the image processing device 160 may be configured with a display device.
  • the display device can be part of a computer (eg, a screen of a laptop, etc.).
  • the display device may also be a stand-alone display (for example, LED, OLED, or LCD) separate from the image processing device, such as a standard television, HDTV, digital television, or the display of any type of terminal such as a game console.
  • the spot recognition method is applied to the tracking and positioning system shown in FIG. 1. As shown in FIG. 3, the method may include:
  • Step S110 obtaining a first image corresponding to the light spot image, where the first image is an image of the light spot image in the first color space.
  • image sensor 140 can continuously capture images. Additionally or alternatively, image capture may be triggered by a special event or by data/signals transmitted from the terminal or controller 120. For example, the user can perform an activation operation at the input device of controller 120; based on the user's input, controller 120 can transmit a signal that activates image sensor 140 to capture one or more images. Alternatively, controller 120 can transmit input data to image processing device 160, which can activate image sensor 140 to capture one or more images.
  • the captured image may be triggered by the controller 120.
  • the system may also include a sensor for detecting objects within the field of view of image sensor 140; for example, an ultrasonic sensor may be used to detect one or more objects in the field of view of image sensor 140.
  • the image sensor 140 can be activated to take a picture to obtain one or more images.
  • image sensor 140 sends the acquired raw image (eg, a raw map) directly to image processing device 160.
  • image sensor 140 may optionally process the captured raw image and send the processed image to image processing device 160.
  • image sensor 140 may convert the raw image to an RGB image for transmission to image processing device 160.
  • Image sensor 140 can also increase/decrease the contrast and/or brightness of the image.
  • image sensor 140 may receive parameters from image processing device 160 for capturing images.
  • Exemplary parameters for capturing an image may include setting the exposure time, aperture, image resolution/size, field of view (zooming in and out), and/or color space of the image (e.g., color or black and white), and/or parameters used to perform other known camera functions.
  • the exposure parameter of the image sensor can be set to a low exposure parameter, such as 1/10 of normal exposure.
  • the image sensor 140 collects an original image (raw map) of the light spot image.
  • the number of light spots corresponds to the number of point light sources within the collection range of image sensor 140; for example, when there are two point light sources in the range of image sensor 140, the number of light spots in the original image is also two.
  • the original image can be converted to an image within the color space after the original image of the spot image is obtained.
  • the color space can include color spaces such as RGB, LAB, YCbCr, and HSV.
  • Each color space includes at least three color parameters, for example, the RGB space includes an R value, a G value, and a B value, and the HSV space includes an H value, an S value, a V value, and the like.
  • image processing device 160 obtains a raw map of the spot image acquired by the image sensor, and processes the raw image to obtain an image of the spot image in the first color space as the first image. For example, the color information of an image can be reconstructed by a debayer or demosaic interpolation algorithm.
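As a toy illustration of the demosaicing idea mentioned above: a Bayer raw image has only one color sample per pixel, and the missing channels must be reconstructed. A real debayer routine would use bilinear or edge-aware interpolation; this sketch simply collapses each RGGB 2x2 block into one RGB pixel, which is an assumption for brevity, not the patent's algorithm.

```python
def demosaic_rggb(raw):
    """raw: 2D list (even dimensions) of sensor values in RGGB layout.

    Each 2x2 block contributes one RGB pixel, averaging the two greens.
    """
    h, w = len(raw), len(raw[0])
    rgb = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            r = raw[y][x]
            g = (raw[y][x + 1] + raw[y + 1][x]) / 2
            b = raw[y + 1][x + 1]
            row.append((r, g, b))
        rgb.append(row)
    return rgb

image = demosaic_rggb([[200, 10],
                       [30, 250]])   # one RGGB block -> one RGB pixel
# image[0][0] == (200, 20.0, 250)
```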
  • image sensor 140 directly converts the raw image of the spot image to a first image for transmission to image processing device 160.
  • noise points whose size differs from the preset spot size for the use scene by more than a difference threshold may be filtered out; that is, noise that is excessively large or small compared with the preset spot size is removed.
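A minimal sketch of this size-based filtering, assuming candidate spots have already been reduced to their pixel areas; the area values and thresholds below are illustrative, not from the patent.

```python
def filter_spots_by_size(spot_areas, expected_area, diff_thres):
    """Keep candidate spots whose area is within diff_thres of the
    preset spot size; removes too-large and too-small noise points."""
    return [a for a in spot_areas
            if abs(a - expected_area) <= diff_thres]

kept = filter_spots_by_size([4, 60, 55, 300], expected_area=50, diff_thres=20)
# only the areas 60 and 55 are close enough to the expected spot size
```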
  • Step S120 converting the first image into a second image, where the second image is an image of the spot image in a second color space.
  • Images in different color spaces can be converted to each other, for example RGB images can be converted to YCbCr or HSV images.
  • the image processing device converts the first image into an image in the second color space.
  • the specific conversion algorithm may use an existing color space conversion algorithm based on the first color space, the second color space, and the relationship between the two, which will not be described here.
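For the RGB-to-HSV case, a pixel-by-pixel sketch using Python's standard colorsys module is shown below; real code would use a vectorized conversion over the whole frame, and the toy nested-list "image" is an assumption for illustration.

```python
import colorsys

def rgb_image_to_hsv(rgb_pixels):
    """Convert a first image (RGB, channel values in [0, 1]) into a
    second image (HSV), pixel by pixel."""
    return [[colorsys.rgb_to_hsv(r, g, b) for (r, g, b) in row]
            for row in rgb_pixels]

second = rgb_image_to_hsv([[(0.0, 0.0, 1.0), (0.5, 0.5, 0.5)]])
# pure blue -> H = 2/3, S = 1, V = 1; grey -> S = 0
```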
  • image processing device 160 selectively processes the first image prior to conversion to increase efficiency. For example, image processing device 160 can adjust the image size to reduce the computational requirements of the method. Additionally or alternatively, the noise in the first image may be reduced, the image sharpened, and/or the contrast and/or brightness of the image increased (or decreased) so that the spot in the image may be more easily detected. Of course, other types of image processing techniques are also contemplated.
  • Step S130 identifying a spot of the target color in the second image according to a preset color recognition condition of the second color space.
  • a color recognition condition may be set in advance; it may include, for example, a plurality of sets of threshold intervals, each set corresponding to a given color. For example, for given colors C1, C2, C3, ..., the corresponding sets of threshold intervals are T1, T2, T3, ..., respectively, and so on; different given colors correspond to different threshold intervals.
  • the number of threshold intervals can be designed according to actual needs. For example, if there are two target colors, only two threshold intervals need to be divided in the second color space; if there are seven target colors, seven threshold intervals can be divided. Compared with the complicated equipment of the prior art, point light sources supporting multiple IDs can be easily added: the threshold intervals of the color space only need to be subdivided as required.
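The idea of supporting more point-source IDs by subdividing the color space can be sketched as an equal-width partition of the normalized H axis; equal widths are an assumption for illustration, since in practice the intervals would be placed around each color's H_center.

```python
def divide_h_space(n_colors, h_range=1.0):
    """Partition the normalized H axis into n equal threshold intervals,
    one per controller color."""
    width = h_range / n_colors
    return [(i * width, (i + 1) * width) for i in range(n_colors)]

two = divide_h_space(2)    # two target colors -> two intervals
seven = divide_h_space(7)  # seven target colors -> seven intervals
```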
  • each set of threshold intervals includes at least one color parameter of the second color space and the threshold interval corresponding to the given color. Assuming the second color space includes color parameters A1, A2, and A3, each set of threshold intervals includes at least a threshold interval corresponding to A1 and the given color; of course, the set may further include a threshold interval corresponding to A2 or A3 and the given color.
  • the step of identifying a light spot of the target color in the second image according to the preset color recognition condition of the second color space may include: comparing the color parameters of the second image with the plurality of sets of threshold intervals, and determining the spot of the target color according to the comparison result. Assume the threshold interval corresponding to the target color is T1; the color of the second image is compared with T1. If it falls within the range of T1, a spot of the target color is determined; if it does not, there is no spot of the target color.
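This comparison step can be sketched as follows; the interval bounds are hypothetical normalized-H ranges for two target colors, not values from the patent.

```python
# Hypothetical threshold intervals T1, T2 on the normalized H parameter.
THRESHOLDS = {
    "blue": (0.55, 0.72),   # (lower, upper) bound on H
    "pink": (0.85, 0.95),
}

def classify_spot(h, thresholds=THRESHOLDS):
    """Compare a spot's H value with each color's threshold interval and
    return the matching target color, or None if no interval contains it."""
    for color, (lo, hi) in thresholds.items():
        if lo < h < hi:
            return color
    return None
```

For example, an H value of 0.66 falls inside the blue interval, 0.9 falls inside the pink interval, and 0.3 matches neither.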
  • the distribution of one or more color parameters of the second image in the second color space may be analyzed first, and then the threshold interval corresponding to a color parameter and a given color adaptively adjusted according to the distribution.
  • further, after the spots are recognized, spots of different target colors may be segmented from one another as actually needed.
  • the image processing device 160 can recognize the spot of the target color in the second image through the color recognition condition of the second color space, needing only the image information of the current frame to output the recognition result without delay; moreover, the method requires simple equipment: it suffices to provide point light sources of different colors on the controller, and for different target color spots only the color recognition conditions need to be modified, with no changes to hardware devices.
  • the image of the spot image in the HSV space is compared with a plurality of sets of threshold intervals of the preset HSV space, and the spot of the target color is determined according to the comparison result.
  • each set of threshold intervals may include a threshold interval corresponding to the H parameter and the given color.
  • the threshold interval may also increase the threshold interval corresponding to the S parameter and the given color, or the threshold interval corresponding to the V parameter and the given color, or a combination of the three.
  • the target color spot can be determined only if the color parameters satisfy the threshold intervals.
  • each set of threshold intervals includes at least the threshold interval corresponding to the H parameter and the given color, which is (H_center - h_thres) < H < (H_center + h_thres), where H is the H parameter, H_center is the preset H value of the given color, and h_thres is the tolerance corresponding to the given color.
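The interval test above can be sketched in a few lines of Python. Treating the normalized hue axis as cyclic is our assumption (the patent does not discuss wraparound, but red sits at both ends of the H axis, so a cyclic distance is the natural reading):

```python
def in_h_interval(h, h_center, h_thres):
    """Check (H_center - h_thres) < H < (H_center + h_thres) on the
    normalized hue axis [0, 1), measuring distance cyclically so that
    hues near 1.0 match a center near 0.0 (e.g. red)."""
    d = abs(h - h_center) % 1.0
    d = min(d, 1.0 - d)  # shortest distance around the hue circle
    return d < h_thres
```

For example, `in_h_interval(0.98, 0.0, 0.05)` is true, since 0.98 is only 0.02 away from red's center when measured around the circle.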
  • the threshold interval corresponding to the H parameter and the given color can be obtained by the following method:
  • FIG. 6 is a schematic diagram of the correspondence between the normalized H parameters and the different colors.
  • the long bar of the color bars in FIG. 6 to FIG. 8 indicates that the H parameter of the HSV space increases from 0 to 1.
  • as the H parameter increases, the corresponding colors of the RGB space may be, in order: red, yellow, green, cyan, blue, purple, pink.
  • Figure 8 is only an example, and the specific range of a given color may be wider or narrower.
  • if only spots of two colors, e.g. green and blue, need to be recognized, the threshold intervals can be divided by cutting the H parameter as shown in FIG. 7.
  • if spots of more target colors are required, more colors can be supported by subdividing the H parameter; for example, with seven target colors the H parameter can be subdivided as in FIG. 8, and threshold intervals corresponding to more colors can be extended in the same way.
  • the h_thres corresponding to a solid color may be smaller than the h_thres corresponding to a mixed color.
  • each set of threshold intervals may further include the threshold interval corresponding to the S parameter and the given color, which is s_min_thres < S < s_max_thres, where S is the S parameter, s_min_thres is the preset lower saturation threshold for the given color, and s_max_thres is the preset upper saturation threshold.
  • s_min_thres and s_max_thres may be adaptively adjusted according to the overall or local saturation of the second image.
  • each set of threshold intervals may further include the threshold interval corresponding to the V parameter and the given color, which is v_min_thres < V < v_max_thres, where V is the V parameter, v_min_thres is the preset lower brightness threshold for the given color, and v_max_thres is the preset upper brightness threshold.
  • v_min_thres and v_max_thres may be adaptively adjusted according to the overall or local brightness of the second image.
  • equations (1), (2), and (3) can also be used to identify spots of different target colors.
  • the color recognition condition can be set in advance as the combination of equations (1), (2), and (3): a pixel is determined to belong to the spot of a given color only when all of the inequalities in the combination are satisfied.
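A minimal per-pixel sketch of such a combined condition, using Python's standard `colorsys` module for the RGB-to-HSV conversion. The numeric centers and bounds in `CONDITIONS` are illustrative assumptions, not values from the patent:

```python
import colorsys

# Hypothetical threshold table per target color; each entry carries an
# H_center/h_thres pair plus (lower, upper) S and V bounds.
CONDITIONS = {
    "blue": {"h_center": 0.67, "h_thres": 0.06,
             "s": (0.5, 1.0), "v": (0.4, 1.0)},
    "pink": {"h_center": 0.92, "h_thres": 0.08,
             "s": (0.2, 0.9), "v": (0.5, 1.0)},
}

def classify_pixel(r, g, b):
    """Return the first target color whose equations (1)-(3) all hold,
    or None when the pixel matches no target color."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    for name, c in CONDITIONS.items():
        d = abs(h - c["h_center"]) % 1.0
        d = min(d, 1.0 - d)                       # cyclic hue distance
        if (d < c["h_thres"]                      # equation (1)
                and c["s"][0] < s < c["s"][1]     # equation (2)
                and c["v"][0] < v < c["v"][1]):   # equation (3)
            return name
    return None
```

A bright bluish pixel such as `(40, 80, 250)` lands in the assumed blue intervals, while a saturated green pixel matches neither entry.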
  • a stricter threshold can be used for the judgment, for example by reducing the value of h_thres. The h_thres corresponding to mixed colors can also be appropriately enlarged to reduce misrecognition caused by color unevenness, and different colors can be given different values of h_thres.
  • for example, the h_thres corresponding to the solid colors R/G/B may be reduced appropriately, and the h_thres corresponding to the mixed colors (e.g., yellow/cyan/purple/pink) may be enlarged.
  • FIGS. 9 to 11 are, respectively, the H/V/S components in the HSV space of the first image shown in FIG. 5. In the process of recognizing the spots in FIG. 5 (assuming FIG. 5 includes two spots, blue B and pink P), the H/V/S components of the second image are compared with the threshold intervals corresponding to blue and pink among the multiple sets of threshold intervals, and the blue and pink spots are determined according to the comparison results: if the H/V/S components of the second image fall within the threshold ranges corresponding to blue, the blue spot can be determined (as shown in FIG. 12); if they fall within the threshold ranges corresponding to pink, the pink spot can be determined (as shown in FIG. 13).
  • the present invention further provides a spot recognition device 400.
  • the spot recognition device 400 includes an image acquisition module 410, an image conversion module 420, and a spot recognition module 430.
  • the image obtaining module 410 is configured to obtain a first image corresponding to a spot image, the first image being an image of the spot image in a first color space; the image conversion module 420 is configured to convert the first image into a second image, the second image being an image of the spot image in a second color space; the spot recognition module 430 is configured to recognize a spot of a target color in the second image according to a preset color recognition condition of the second color space.
  • the color recognition condition includes multiple sets of threshold intervals, each set corresponding to a given color and including a threshold interval corresponding to at least one color parameter of the second color space and that given color.
  • the image acquisition module 410 is further configured to obtain a raw image of the spot image acquired by the image sensor; and obtain an image of the spot image in the first color space as the first image.
  • the image conversion module 420 is further configured to filter out noise points in the first image or the second image whose difference from the spot size exceeds a difference threshold.
  • the spot recognition module 430 is further configured to compare the color parameter of the second image with the plurality of sets of threshold intervals, and determine a spot of the target color according to the comparison result.
  • the spot recognition module 430 can also be configured to analyze the distribution of each color parameter of the second image in the second color space, and to adjust, according to the distribution, the threshold interval corresponding to a color parameter and a given color.
  • the first color space is an RGB space
  • the second color space is an HSV space.
  • each set of threshold intervals includes at least the threshold interval corresponding to the H parameter and the given color, which is (H_center - h_thres) < H < (H_center + h_thres), where H is the H parameter, H_center is the preset H value of the given color, and h_thres is the tolerance corresponding to the given color.
  • the method of determining the threshold interval corresponding to the H parameter and a given color includes: converting different colors into the HSV space; normalizing the H parameter; and determining the correspondence between the multiple given colors and the normalized H parameter, taking the corresponding interval as the threshold interval corresponding to the H parameter and the given color.
  • the h_thres corresponding to a solid color is smaller than the h_thres corresponding to a mixed color.
  • each set of threshold intervals further includes the threshold interval corresponding to the S parameter and the given color, which is s_min_thres < S < s_max_thres, where S is the S parameter, s_min_thres is the preset lower saturation threshold for the given color, and s_max_thres is the preset upper saturation threshold. s_min_thres and s_max_thres may be set according to the overall or local saturation of the second image.
  • each set of threshold intervals further includes the threshold interval corresponding to the V parameter and the given color, which is v_min_thres < V < v_max_thres, where V is the V parameter, v_min_thres is the preset lower brightness threshold for the given color, and v_max_thres is the preset upper brightness threshold. v_min_thres and v_max_thres may be set according to the overall or local brightness of the second image.
  • the present invention further provides a terminal.
  • the terminal 500 may include an RF (Radio Frequency) circuit 510, a memory 520 including one or more computer readable storage media, an input unit 530, a display unit 540, an audio circuit 560, a WiFi module 570, a processor 580, a power source 590, and other components.
  • the terminal structure shown in FIG. 15 does not constitute a limitation to the terminal, which may include more or fewer components than those illustrated, combine certain components, or use different component arrangements. Among them:
  • the RF circuit 510 can be used for receiving and transmitting signals in the course of receiving and sending information; in particular, after receiving downlink information of a base station, it hands the information to one or more processors 580 for processing, and it also sends uplink data to the base station.
  • RF circuitry 510 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, etc.
  • RF circuitry 510 can also communicate with the network and other devices via wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System of Mobile communication), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access). , Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (Short Messaging Service), and the like.
  • the memory 520 can be used to store software programs and modules, and the processor 580 executes various functional applications and data processing by running software programs and modules stored in the memory 520.
  • the memory 520 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data created according to the use of the terminal 500 (such as audio data, a phone book, etc.) and the like.
  • memory 520 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, memory 520 may also include a memory controller to provide access to memory 520 by processor 580 and input unit 530.
  • Input unit 530 can be used to receive input numeric or character information, as well as to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function controls.
  • input unit 530 can include touch-sensitive surface 531 as well as other input devices 532.
  • the touch-sensitive surface 531, also referred to as a touch display screen or touchpad, can collect touch operations by the user on or near it (e.g., operations performed on or near the touch-sensitive surface 531 by the user using a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connecting device according to a preset program.
  • the touch-sensitive surface 531 can include two portions of a touch detection device and a touch controller.
  • the touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 580; it can also receive commands from the processor 580 and execute them.
  • the touch sensitive surface 531 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 530 can also include other input devices 532. Specifically, the other input devices 532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), a trackball, a mouse, a joystick, and the like.
  • Display unit 540 can be used to display information entered by the user or information provided to the user and various graphical user interfaces of terminal 500, which can be composed of graphics, text, icons, video, and any combination thereof.
  • the display unit 540 can include a display panel 541.
  • the display panel 541 can be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like.
  • the touch-sensitive surface 531 can cover the display panel 541, and when the touch-sensitive surface 531 detects a touch operation thereon or nearby, it is transmitted to the processor 580 to determine the type of the touch event, and then the processor 580 according to the touch event The type provides a corresponding visual output on display panel 541.
  • although the touch-sensitive surface 531 and the display panel 541 are implemented as two separate components to realize the input and output functions, in some embodiments the touch-sensitive surface 531 can be integrated with the display panel 541 to realize the input and output functions.
  • Terminal 500 may also include at least one other sensor than image sensor 140, such as a light sensor, motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor may adjust the brightness of the display panel 541 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 541 and/or the backlight when the terminal 500 is moved to the ear.
  • the gravity acceleration sensor can detect the magnitude of acceleration in all directions (usually three axes) and, when stationary, can detect the magnitude and direction of gravity.
  • the terminal 500 can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described in detail here.
  • Audio circuit 560, speaker 561, and microphone 562 can provide an audio interface between the user and terminal 500.
  • the audio circuit 560 can transmit the electrical signal converted from received audio data to the speaker 561, which converts it into a sound signal for output; conversely, the microphone 562 converts collected sound signals into electrical signals, which the audio circuit 560 receives and converts into audio data; after the audio data is processed by the processor 580, it is sent via the RF circuit 510 to, for example, another terminal, or output to the memory 520 for further processing.
  • the audio circuit 560 may also include an earbud jack to provide communication of the peripheral earphones with the terminal 500.
  • WiFi is a short-range wireless transmission technology
  • the terminal 500 can help users to send and receive emails, browse web pages, and access streaming media through the WiFi module 570, which provides wireless broadband Internet access for users.
  • although FIG. 15 shows the WiFi module 570, it can be understood that it is not a necessary component of the terminal 500 and can be omitted as needed without changing the essence of the invention.
  • the processor 580 is the control center of the terminal 500; it connects various parts of the entire terminal using various interfaces and lines, and performs the various functions of the terminal 500 and processes data by running or executing the software programs and/or modules stored in the memory 520 and invoking the data stored in the memory 520, thereby performing overall monitoring of the mobile phone.
  • the processor 580 may include one or more processing cores; preferably, the processor 580 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It will be appreciated that the above described modem processor may also not be integrated into the processor 580.
  • the terminal 500 also includes a power source 590 (such as a battery) that supplies power to the various components.
  • the power source can be logically coupled to the processor 580 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • Power supply 590 may also include any one or more of a DC or AC power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
  • the terminal 500 may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the display unit of the terminal is a touch screen display.
  • the terminal also includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs containing instructions for: obtaining a first image corresponding to a spot image, the first image being an image of the spot image in a first color space; converting the first image into a second image, the second image being an image of the spot image in a second color space; and recognizing a spot of a target color in the second image according to a preset color recognition condition of the second color space.
  • the embodiment of the present invention further discloses another motion tracking system, which is different from the motion tracking system shown in FIG. 1 in that the image sensor and the image processing device are integrated in the terminal.
  • the terminal may be a head-mounted display device, a smart phone, a notebook computer, a tablet computer, a smart wearable device, or the like.
  • the controller 120, typically held by the user in one or both hands so that the user can conveniently operate the input keys and the like on it, can communicate with the terminal.
  • the controller 120 can receive input from a user and transmit a signal to the terminal based on the received input, the terminal can process the signal and/or change the game based on the signal.
  • controller 120 can receive data/signals from the terminal for controlling its components.
  • the terminal may send an interaction request or the like, and the controller 120 may receive the interaction request and make corresponding feedback.
  • for example, the user may turn on a function through eye control of the terminal (for example, a head-mounted display device), and the terminal sends a corresponding request to the controller 120; after receiving the request, the controller 120 generates a vibration prompting the user to start the operation.
  • the method, device, and system provided by the embodiments of the present invention can recognize the spot of the target color in the second image through the color recognition condition of the second color space, needing only the image information of the current frame.
  • each block of the flowchart or block diagram can represent a module, a program segment, or a portion of code that includes one or more executable instructions for implementing the specified logical function.
  • the functions noted in the blocks may also occur in a different order than those illustrated in the drawings. For example, two consecutive blocks may be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
  • each functional module in each embodiment of the present invention may be integrated to form a separate part, or each module may exist separately, or two or more modules may be integrated to form a separate part.
  • the functions, if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
  • the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium and including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
  • the foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.


Abstract

A spot recognition method, device, and system. The spot recognition method includes: obtaining a first image corresponding to a spot image, the first image being an image of the spot image in a first color space (S110); converting the first image into a second image, the second image being an image of the spot image in a second color space (S120); and recognizing a spot of a target color in the second image according to a preset color recognition condition of the second color space (S130). The method can output the recognition result using only the image information of the current frame, requires simple equipment, and needs no changes to hardware devices.

Description

Spot recognition method, device, and system

Technical Field

The present invention relates to the field of computer technology, and in particular to a spot recognition method, device, and system.
Background

Virtual reality technology is not merely about head-mounted devices with virtual display effects; interaction control technology is also an important application direction in fields such as virtual reality (VR), augmented reality (AR), and mixed reality (MR), and has strongly driven demand for the rapid development of these fields. In the VR/AR/MR field, the controller (handle) is an important hardware device for interaction control and provides strong support for realizing it. By operating the control keys of the controller (buttons, triggers, touchpad, etc.), the user can realize human-computer interaction.

To enhance the user's virtual reality experience, the tracking and positioning of controllers is currently determined basically by optical methods, for example by infrared light or by adding blinking light spots. However, infrared tracking requires dedicated equipment, and recognizing a blinking spot involves a delay: a complete strobe cycle is needed to recognize the spot, and the strobe frequency of the spot must be precisely controlled.
Summary of the Invention

An object of the embodiments of the present invention is to provide a spot recognition method, device, and system to solve the above problems.

A first aspect of the embodiments of the present invention provides a spot recognition method, including: obtaining a first image corresponding to a spot image, the first image being an image of the spot image in a first color space; converting the first image into a second image, the second image being an image of the spot image in a second color space; and recognizing a spot of a target color in the second image according to a preset color recognition condition of the second color space.
In some embodiments, the color recognition condition includes multiple sets of threshold intervals, each set corresponding to a given color and including a threshold interval corresponding to at least one color parameter of the second color space and that given color.

In some embodiments, recognizing a spot of the target color in the second image according to the preset color recognition condition of the second color space includes: comparing the color parameters of the second image with the multiple sets of threshold intervals, and determining the spot of the target color according to the comparison result.

In some embodiments, recognizing a spot of the target color in the second image according to the preset color recognition condition of the second color space includes: analyzing the distribution of each color parameter of the second image in the second color space; and adjusting, according to the distribution, the threshold interval corresponding to a color parameter and a given color.

In some embodiments, obtaining the first image corresponding to the spot image includes: obtaining a raw image of the spot image acquired by an image sensor; and processing the raw image to obtain the image of the spot image in the first color space as the first image.

In some embodiments, before recognizing a spot of the target color in the second image according to the preset color recognition condition of the second color space, the method further includes: filtering out noise points in the second image whose difference from the spot size exceeds a difference threshold.

In some embodiments, the first color space is an RGB space and the second color space is an HSV space.

In some embodiments, each set of threshold intervals includes at least the threshold interval corresponding to the H parameter and the given color, which is (H_center - h_thres) < H < (H_center + h_thres), where H is the H parameter, H_center is the preset H value of the given color, and h_thres is the tolerance corresponding to the given color.

In some embodiments, the method of determining the threshold interval corresponding to the H parameter and a given color includes: converting different colors into the HSV space; normalizing the H parameter; and determining the correspondence between multiple given colors and the normalized H parameter, taking the corresponding interval as the threshold interval corresponding to the H parameter and the given color.

In some embodiments, the h_thres corresponding to a solid color is smaller than the h_thres corresponding to a mixed color.

In some embodiments, each set of threshold intervals further includes the threshold interval corresponding to the S parameter and the given color, which is s_min_thres < S < s_max_thres, where S is the S parameter, s_min_thres is the preset lower saturation threshold for the given color, and s_max_thres is the preset upper saturation threshold.

In some embodiments, s_min_thres and s_max_thres are set according to the overall or local saturation of the second image.

In some embodiments, each set of threshold intervals further includes the threshold interval corresponding to the V parameter and the given color, which is v_min_thres < V < v_max_thres, where V is the V parameter, v_min_thres is the preset lower brightness threshold for the given color, and v_max_thres is the preset upper brightness threshold.

In some embodiments, v_min_thres and v_max_thres are set according to the overall or local brightness of the second image.
A second aspect of the embodiments of the present invention provides a spot recognition device, including: an image obtaining module, configured to obtain a first image corresponding to a spot image, the first image being an image of the spot image in a first color space; an image conversion module, configured to convert the first image into a second image, the second image being an image of the spot image in a second color space; and a spot recognition module, configured to recognize a spot of a target color in the second image according to a preset color recognition condition of the second color space.

A third aspect of the embodiments of the present invention provides an image processing device, including a processor and a memory, wherein the processor is connected to the memory, the memory is configured to store instructions, and the processor is configured to execute the instructions; when executing the instructions, the processor can perform the foregoing method according to the instructions, so as to process an image to recognize a spot of a target color.

A fourth aspect of the embodiments of the present invention provides an image processing system, including an image sensor and an image processing device, the image processing device including a processor and a memory, wherein the processor is connected to the memory and the image sensor, the memory is configured to store instructions, and the processor is configured to execute the instructions; when executing the instructions, the processor can perform the foregoing method according to the instructions, so as to process images acquired by the image sensor to recognize a spot of a target color.

A fifth aspect of the embodiments of the present invention provides a terminal, including an image sensor, a memory, and a processor coupled to the image sensor and the memory respectively, wherein the memory is configured to store instructions and images acquired by the image sensor, and the processor is configured to execute the instructions; when executing the instructions, the processor can perform the foregoing method according to the instructions, so as to process the images acquired by the image sensor to recognize a spot of a target color.

A sixth aspect of the embodiments of the present invention provides a tracking and positioning system, including a terminal and a controller provided with a point light source, wherein the terminal is provided with an image sensor and is the foregoing terminal.

A seventh aspect of the embodiments of the present invention provides a tracking and positioning system, including an image sensor, a controller provided with a point light source, and an image processing device coupled to the image sensor and the controller respectively, wherein the image processing device is the foregoing image processing device.

An eighth aspect of the embodiments of the present invention provides a readable storage medium storing program code for spot recognition, the program code including instructions for performing the foregoing method.

In the embodiments of the present invention, a spot of a target color can be recognized in the second image through the color recognition condition of the second color space; only the image information of the current frame is needed to output the recognition result, without delay. Moreover, the method requires simple equipment: it suffices to provide point light sources of different colors on the controller, and for different target color spots only the color recognition conditions need to be modified, with no changes to hardware devices; multiple controllers can also be used together.
Brief Description of the Drawings

FIG. 1 is a schematic structural diagram of a tracking and positioning system provided by an embodiment of the present invention;

FIG. 2 is a schematic diagram of an image processing device provided by an embodiment of the present invention;

FIG. 3 is a flowchart of a spot recognition method provided by an embodiment of the present invention;

FIG. 4 is an exemplary schematic diagram of a raw image of a spot image in an embodiment of the present invention;

FIG. 5 is an exemplary schematic diagram of a first image in an embodiment of the present invention;

FIG. 6 is a schematic diagram of the correspondence between the normalized H parameter and colors in an embodiment of the present invention;

FIG. 7 is an exemplary schematic diagram of threshold intervals of the H parameter corresponding to two colors in an embodiment of the present invention;

FIG. 8 is an exemplary schematic diagram of threshold intervals of the H parameter corresponding to seven colors in an embodiment of the present invention;

FIG. 9 is the H component in HSV space of the first image shown in FIG. 5;

FIG. 10 is the V component in HSV space of the first image shown in FIG. 5;

FIG. 11 is the S component in HSV space of the first image shown in FIG. 5;

FIG. 12 shows the blue spot recognized in FIG. 5;

FIG. 13 shows the pink spot recognized in FIG. 5;

FIG. 14 is a structural block diagram of a spot recognition device provided by an embodiment of the present invention;

FIG. 15 is a structural block diagram of a terminal provided by an embodiment of the present invention.
Detailed Description

To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part, not all, of the embodiments of the present invention. The components of the embodiments of the present invention generally described and illustrated in the drawings herein can be arranged and designed in a variety of different configurations.

Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings.

In the description of the present invention, it should be noted that orientation or positional relationships indicated by terms such as "central", "upper", "lower", "inner", and "outer" are based on the orientations or positional relationships shown in the drawings, or the orientations in which the product of the invention is conventionally placed in use; they are only for convenience and simplification of description and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore cannot be construed as limiting the invention. In addition, terms such as "first" and "second" are only used to distinguish descriptions and cannot be understood as indicating or implying relative importance.

Furthermore, terms such as "horizontal" and "vertical" do not require components to be absolutely horizontal or vertical; they may be slightly inclined. "Horizontal" merely means that a direction is more horizontal relative to "vertical", not that the structure must be completely horizontal.

In the description of the present invention, it should also be noted that, unless otherwise explicitly specified and limited, the terms "disposed", "mounted", "connected", and "coupled" should be understood broadly: for example, a connection may be fixed, detachable, or integral; mechanical or electrical; direct or indirect through an intermediate medium; or internal communication between two elements. Those of ordinary skill in the art can understand the specific meanings of the above terms in the present invention according to the specific situation. The present invention will be disclosed in detail below with reference to embodiments, and the listed examples will be described in conjunction with the drawings. For ease of reading, the same reference numerals will be used throughout the drawings to refer to the same or similar components.
FIG. 1 is an exemplary block diagram of a tracking and positioning system provided by an embodiment of the present invention. As shown in FIG. 1, in some embodiments, the tracking and positioning system 100 may include a controller 120, an image sensor 140, and an image processing device 160.

The controller 120 may be a gamepad, a pointing stick, a motion-sensing mouse, or the like, provided with a point light source. There may be one or more controllers, and one or more point light sources on each controller. A point light source may be a light source, such as an LED lamp, that emits light from a point into the surrounding space.

In some embodiments, the point light source may be an LED lamp formed from solid-color red, green, or blue beads, or an LED lamp obtained by mixing beads of different colors; for example, a purple or pink LED lamp can be obtained by mixing red and blue beads. The LED lamp may have many shapes, for example spherical, hemispherical, triangular, or five-pointed star.

The image of the controller 120 acquired by the image sensor 140 may include a spot image. A spot image includes a background and spots distributed over the background, a spot being the image formed by a point light source. The image processing device 160 can track and position the controller 120 based on the spot image of the controller 120 captured by the image sensor 140.

Compared with existing solutions, the embodiments of the present invention can track and position the controller directly based on the spot image of the point light sources on the controller acquired by the image sensor, avoiding the need to control the light source and the light source frequency, reducing the control requirements on the image processing device, and requiring no special adjustment of the image sensor to cooperate with the tracking and positioning of the controller.

The image sensor 140 may be any image sensing device capable of capturing images of objects within its field of view, and is used to expose and image the light sources on the controller 120 to obtain a raw image. In some embodiments, the image sensor 140 may not have a fixed position; for example, it may be worn by the user (e.g., on the user's head as part of a head-mounted display device) and move with the user; FIG. 1 takes the case where the image sensor 140 is disposed on a head-mounted display device as an example. In some embodiments, the image sensor 140 may be disposed at a fixed position, for example on a table or a shelf, and may be configured to capture images of objects within its field of view from different positions.

The image sensor 140 may be a CMOS (Complementary Metal Oxide Semiconductor) sensor, a CCD (Charge-Coupled Device) sensor, or the like.

In some embodiments, the image sensor 140 may be configured to capture multiple images at different points in time within a period. For example, when the controller 120 moves within the field of view of the image sensor 140, the image sensor 140 can capture images of the controller 120 at different positions during that period. The image sensor 140 can also obtain time information when capturing each image and send the time information together with the image to the image processing device 160 for further processing. In an embodiment of the present invention, the image processing device 160 may be configured to track and position the controller 120 by recognizing the spots included in the image.

Due to the influence of the actual imaging environment, after exposure, besides the patterns formed by the point light sources, spot-like patterns formed by other objects in the environment (such as fluorescent lamps) will inevitably exist in the raw image. Therefore, after the image sensor 140 acquires the raw image, it may send the raw image to the image processing device 160, which first processes it to exclude the influence of spot-like patterns formed by other objects; of course, this processing may also be performed during spot recognition.

Referring again to FIG. 1, the image sensor 140 may communicate with the image processing device 160 and send image data to it. The image sensor 140 may also receive from the image processing device 160 command signals setting parameters for capturing images. Exemplary parameters for capturing images may include those for setting exposure time, aperture, image resolution/size, field of view (e.g., zooming in and out), and/or the color space of the image (e.g., color or black and white), and/or parameters for performing other types of known camera functions. The image sensor 140 and the controller 120 may be connected via a network connection, a bus, or another type of data link (e.g., a hardwired connection, a wireless connection (e.g., Bluetooth™), or another connection known in the art).
The image processing device 160 may be an entity with good computing capability such as an embedded processor, a digital image processor, a smartphone, a computer, a tablet, or a notebook. The image processing device may or may not be provided with an image sensor.

The image processing device 160 may be configured to receive and process data/signals from other components of the system. For example, as disclosed herein, the image processing device 160 may receive and process image data from the image sensor 140 and/or input data from the controller 120. The image processing device 160 may also send data/signals to other components of the system, and the other components may perform certain functions based on the data/signals from the image processing device 160.

Referring to FIG. 2, in some embodiments, the image processing device 160 may include a processor 161, a memory 162, and a communication interface 163.

The processor 161 may include any appropriate type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. The processor 161 may be configured as a separate processor module dedicated to positioning and tracking objects, or as a shared processor module for performing other functions unrelated to tracking objects. The processor 161 may be configured to receive data and/or signals from various components of the system via, for example, a network, and may also process the data and/or signals to determine one or more operating conditions in the system. For example, the processor 161 may receive an image from the image sensor 140 and determine whether the image includes a recognition pattern, and may further determine marker points included in the recognition pattern. Additionally or alternatively, the processor 161 may determine the size and number of the marker points included in the recognition pattern, and may determine the tracked target based on the determined size and/or number of the marker points.

The memory 162 may include any appropriate type of mass storage that provides storage for any type of information the processor may need in order to operate. The memory may be a volatile or non-volatile, magnetic, semiconductor, tape, optical, erasable, non-erasable, or other type of storage device or tangible (i.e., non-transitory) computer-readable medium, including but not limited to ROM, flash memory, dynamic RAM, and static RAM. The memory 162 may be configured to store one or more computer programs, executable by the processor 161, for the exemplary object tracking and positioning functions disclosed herein. For example, the memory 162 may be configured to store programs executable by the processor 161.

The memory 162 may also be configured to store information and data used by the processor 161. For example, the memory 162 may be configured to store a lookup table including recognition patterns and their corresponding parameters. If a recognition pattern is known, the processor can determine its identity by querying the lookup table.

The communication interface 163 may be configured to facilitate communication between the controller and other components of the system, for example over a network. For example, the image processing device 160 may receive input data/signals from the controller via the communication interface to control a character in a game. The image processing device 160 may also transmit data/signals via the communication interface 163 to other displays for presenting the game (image, video, and/or sound signals).

The network may include, or partly include, any one or more of various networks or other types of communication connections known to those skilled in the art. The network may include network connections, buses, or other types of data links, such as hardwired or other connections known in the art. For example, the network may include the Internet, an intranet, a local area network, or other wireless or hardwired connections, or other connection means (e.g., Bluetooth, WiFi, 4G, LTE cellular data networks, etc.) through which the components of the system communicate.

The image processing device 160 may be configured with a display device. In some embodiments, the display device may be part of a computer (e.g., the screen of a laptop). In some embodiments, the display device may be a display device (e.g., LED, OLED, or LCD) separate from a standalone standard television, an HDTV, a digital television, or any type of terminal (e.g., a game console).
A spot recognition method based on the point light sources on the controller 120 will be described below with reference to the drawings. The spot recognition method is applied to the tracking and positioning system shown in FIG. 1 and, as shown in FIG. 3, may include:

Step S110: obtain a first image corresponding to a spot image, the first image being an image of the spot image in a first color space.

In some embodiments, the image sensor 140 may capture images continuously. Additionally or alternatively, capturing may be triggered by a special event or by data/signals sent from the terminal or the controller 120. For example, the user may perform a start operation on the input device of the controller 120; the controller 120 may then transmit a signal for starting the image sensor 140 to capture one or more images based on the user's input. Alternatively, the controller 120 may send input data to the image processing device 160, and the image processing device 160 may start the image sensor 140 to capture one or more images.

In some game events, capturing may be triggered by the controller 120. Additionally or alternatively, the system may include a sensor for detecting objects within the field of view of the image sensor 140; for example, an ultrasonic sensor may be used to detect one or more objects in the field of view of the image sensor 140. In this embodiment, if an object is detected, the image sensor 140 may be started to take pictures and obtain one or more images.

In some embodiments, the image sensor 140 sends the acquired raw image directly to the image processing device 160. In some embodiments, the image sensor 140 may optionally process the captured raw image and send the processed image to the image processing device 160; for example, the image sensor 140 may convert the raw image into an RGB image before sending it to the image processing device 160, and may also increase/decrease the contrast and/or brightness of the image.

In some embodiments, the image sensor 140 may receive parameters for capturing images from the image processing device 160. Exemplary parameters may include those for setting exposure time, aperture, image resolution/size, field of view (zooming in and out), and/or the color space of the image (e.g., color or black and white), and/or parameters for performing other types of known camera functions. For example, in some embodiments, to filter out objects of similar colors in the environment, the exposure parameter of the image sensor may be set to a low value, for example 1/10 of normal exposure.

In this step, the image sensor 140 acquires a raw image of the spot image. In the spot image, the number of spots corresponds one-to-one to the number of point light sources within the acquisition range of the image sensor 140; for example, if there are two point light sources within the acquisition range of the image sensor 140, the raw image also contains two spots.

After the raw image of the spot image is obtained, the raw image can be converted into its image in a color space. Color spaces may include RGB, LAB, YCbCr, HSV, and so on. Each color space includes at least three color parameters; for example, the RGB space includes R, G, and B values, and the HSV space includes H, S, and V values.

In some embodiments, the image processing device 160 obtains the raw image of the spot image acquired by the image sensor, and processes the raw image to obtain the image of the spot image in the first color space as the first image; for example, the color information of the image can be reconstructed by a debayer or demosaic interpolation algorithm.

In other embodiments, the image sensor 140 directly converts the raw image of the spot image into the first image and sends it to the image processing device 160.

Optionally, to avoid misrecognition, noise points whose difference from the preset size of a spot in the usage scene exceeds a difference threshold can be filtered out of the raw image or the first image; that is, noise points that are too large or too small compared with the preset spot size are filtered out.
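The size-based noise filtering described above can be sketched as follows. This is a sketch only: 4-connectivity and pixel count as the measure of spot size are our assumptions, and a production system would more likely use a library routine such as OpenCV's connected-component analysis:

```python
from collections import deque

def filter_blobs(mask, expected_size, max_diff):
    """Keep only connected components of a binary mask whose pixel count
    is within max_diff of the expected spot size; everything else is
    treated as a noise point and removed. mask is a list of 0/1 rows."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # breadth-first flood fill collects one 4-connected component
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if abs(len(comp) - expected_size) <= max_diff:
                    for cy, cx in comp:
                        out[cy][cx] = 1
    return out
```

With `expected_size=4` and `max_diff=1`, a 2x2 blob survives while an isolated single pixel is filtered out.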
步骤S120,将所述第一图像转换成第二图像,所述第二图像为所述光点图像在第二颜色空间中的图像。
不同颜色空间中的图像可以相互转换,例如RGB图像可以转换为YCbCr或者HSV图像。图像处理装置将第一图像转换为其在第二颜色空间中的图像,在本发明实施例中,具体的转换算法可以根据第一颜色空间、第二颜色空间以及第一颜色空间与第二颜色空间的关系采用现有的颜色空间转换算法,这里不再赘述。
在一些实施例中,图像处理装置160在进行转换前选择性地处理第一图像,以便提高效率。例如,图像处理装置160可以调整图像大小,从而减少该方法中的计算需求。附加地或替代地,可以降低第一图像中的噪声,锐化图像,和/或增加(或减小)图像的合约和/或亮度,使得识别图像中的光点可以更容易检测。当然,也可以考虑其它类型的图像处理技术。
步骤S130,根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点。
针对每个颜色空间,可以预先设置颜色识别条件,颜色识别条件例如可以包括多组阈值区间,每组阈值区间对应一个给定颜色。例如对于给定颜色C1、C2、C3…,对应的一组阈值区间分别为T1、T2、T3…。一般情况下,如果给定颜色相近,阈值区间也相近,不同给定颜色对应不同的阈值区间。多组阈值区间的数量也可以根据实际需要进行设计,例如目标颜色为两个,只要在第二颜色空间划分出两组阈值区间即可,如果目标颜色为七个,可以划分七组阈值区间,相对于现有技术中复杂的设备,可以很容易的扩展出支持多种ID的点光源,只需要根据需求细分颜色空间的阈值区间即可。
在本实施例中,每组阈值区间包括第二颜色空间中的至少一个颜色参数与该给定颜色所对应的阈值区间。假设第二颜色空间包括颜色参数A1、A2、A3,那么同一组阈值区间至少包括A1与给定颜色对应的阈值区间,当然,该组阈值区间还可以包括A2或A3与给定颜色对应的阈值区间。
在一些实施例中，根据预设的第二颜色空间的颜色识别条件，在所述第二图像中识别出目标颜色的光点的步骤可以包括：将所述第二图像的颜色参数与所述多组阈值区间进行比较，根据比较结果确定目标颜色的光点。假设目标颜色对应的阈值区间为T1，将第二图像的颜色参数与T1进行比较，如果落在T1范围内，则可以确定目标颜色的光点，如果没有落在T1范围内，表示没有目标颜色的光点。
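将第二图像的颜色参数与多组阈值区间比较的逻辑可以示意如下(T1中的具体数值为假设，仅用于说明比较过程)：

```python
def match_color(hsv_pixel, intervals):
    # 判断一个 HSV 像素是否落入某给定颜色的一组阈值区间内;
    # intervals 形如 {'h': (lo, hi), 's': (lo, hi), 'v': (lo, hi)},
    # 只对给出的分量做比较 (对应"至少包括一个颜色参数"的描述)
    names = ('h', 's', 'v')
    for name, value in zip(names, hsv_pixel):
        if name in intervals:
            lo, hi = intervals[name]
            if not (lo < value < hi):
                return False
    return True

T1 = {'h': (0.42, 0.58), 's': (0.15, 1.0)}   # 假设的目标颜色阈值区间 T1
print(match_color((0.50, 0.8, 0.9), T1))     # True: 落在 T1 内
print(match_color((0.10, 0.8, 0.9), T1))     # False: H 分量不在区间内
```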
在一些实施例中，可以先分析所述第二图像在第二颜色空间中的某一个或者多个颜色参数的分布情况，然后根据分布情况，自适应地调节颜色参数与对应给定颜色的阈值区间。
进一步的,根据实际需要在第二图像中识别出目标颜色的光点后还可以将不同目标颜色的光点进行分割。
图像处理装置160可以通过第二颜色空间的颜色识别条件，在第二图像中识别出目标颜色的光点，只需要当前帧的图像信息，即可输出识别结果，没有延迟；此外，该方法需要的设备简单，只要在控制器上设置不同颜色的点光源即可，而且根据不同的目标颜色光点只需要修改颜色识别条件，不需要对硬件设备进行改变。
下面以第一颜色空间为RGB空间,第二颜色空间为HSV空间为例,对上述方法进行详细的说明。
首先获得光点图像的原始图像(如图4所示),然后获得原始图像在RGB空间中的图像(如图5所示),然后将RGB空间中的图像转换为HSV空间中的图像,具体的转换方法可以采用现有的转换算法,这里不再赘述。
将光点图像在HSV空间中的图像与预设的HSV空间的多组阈值区间进行比较,根据比较结果确定目标颜色的光点。
在HSV空间中，H代表的是色调(Hue)，S代表的是饱和度(Saturation)，V代表的是亮度(Value)。因此，在环境中的其他物体很难与点光源成像类似的情况下，每组阈值区间只要包括H参数与给定颜色所对应的阈值区间即可。当然，为了增加识别的准确度，每组阈值区间也可以增加S参数与给定颜色所对应的阈值区间，或者V参数与给定颜色所对应的阈值区间，或者三者的组合。在两个或两个以上颜色参数对应的阈值区间组合的情况下，只有当各颜色参数都满足对应的阈值区间时，才能确定是目标颜色光点。
在一些实施例中,每组阈值区间至少包括H参数与给定颜色所对应的阈值区间,H参数与给定颜色所对应的阈值区间为:
(Hcenter-hthres)<H<(Hcenter+hthres),      (1)
其中,H为H参数,Hcenter为给定颜色预设的H对应值,hthres为给定颜色对应的容忍度。
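判断式(1)可以用如下Python函数示意。需要注意H是环形量(0与1相接，均对应红色附近)，当阈值区间跨越0/1边界时直接比较会漏判，下面的环绕处理方式是原文之外的一个实现上的假设：

```python
def in_hue_interval(h, h_center, h_thres):
    # 判断归一化色调 h 是否满足 (Hcenter-hthres) < H < (Hcenter+hthres);
    # 由于 H 是环形量 (0 与 1 相邻), 这里用环上的最短距离来比较
    diff = abs(h - h_center) % 1.0
    dist = min(diff, 1.0 - diff)
    return dist < h_thres

print(in_hue_interval(0.98, 0.0, 0.05))  # True: 0.98 与红色中心 0.0 的环上距离为 0.02
print(in_hue_interval(0.50, 0.0, 0.05))  # False: 青色远离红色中心
```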
H参数与给定颜色对应的阈值区间可以通过以下方法获得:
将不同颜色转换到HSV空间，对H参数进行归一化处理，然后确定RGB的给定颜色与归一化后的H参数之间的对应关系，将对应的阈值区间作为H参数与给定颜色所对应的阈值区间。请参见图6，图6为归一化后的H参数与不同颜色之间对应关系的示意图，图6至图8中的长条形的颜色条表示当HSV空间的H参数由0向1增加时所对应的RGB空间的颜色，例如，可以依次为：红(red)、黄(yellow)、绿(green)、青(cyan)、蓝(blue)、紫(purple)、粉(pink)。当然，这只是举例，具体的某个给定颜色的范围可以更宽或者更窄。
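下面的Python片段(仅为示意)用标准库colorsys计算几种纯色归一化后的H参数，与图6所示颜色条的走向大致对应：

```python
import colorsys

# 几种给定颜色的 RGB 值 (纯色示例; 色条两端 0 与 1 都对应红色附近)
given_colors = {
    'red':     (255, 0, 0),
    'yellow':  (255, 255, 0),
    'green':   (0, 255, 0),
    'cyan':    (0, 255, 255),
    'blue':    (0, 0, 255),
    'magenta': (255, 0, 255),
}
hues = {name: colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]
        for name, (r, g, b) in given_colors.items()}
for name, h in hues.items():
    print(f'{name:8s} H = {h:.4f}')
# red 0.0000, yellow 0.1667, green 0.3333, cyan 0.5000, blue 0.6667, magenta 0.8333
```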
请参见图7,若目标颜色光点只有两个颜色,例如,只需要识别绿(green)、蓝(blue)两种颜色的光点,可以按照图7切割H参数的方式进行阈值区间的分割。
若目标颜色光点为更多颜色的光点,可以通过细分H参数来支持更多的颜色,例如,若目标颜色光点有七种颜色,则可以参照图8细分H参数,同理可以扩展出更多颜色对应的阈值区间。
在一些实施例中,纯色对应的hthres可以小于混色对应的hthres
在一些实施例中,每组阈值区间还可以包括S参数与给定颜色所对应的阈值区间,S参数与给定颜色所对应的阈值区间为:
(smin_thres)<S<(smax_thres),         (2)
其中,S为S参数,smin_thres为给定颜色预设的饱和度下阈值,smax_thres为给定颜色预设的饱和度上阈值。
进一步的，可以根据所述第二图像的整体或局部饱和度自适应地调整smin_thres和smax_thres。
在一些实施例中,每组阈值区间还可以包括V参数与给定颜色所对应的阈值区间,V参数与给定颜色所对应的阈值区间为:
(vmin_thres)<V<(vmax_thres),         (3)
其中,V为V参数,vmin_thres为给定颜色预设的亮度下阈值,vmax_thres为给定颜色预设的亮度上阈值。
进一步的，可以根据第二图像的整体或局部亮度自适应地调整vmin_thres和vmax_thres。
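按图像整体(或局部)统计量自适应调整阈值的思路可以示意如下，同样适用于前述饱和度阈值smin_thres的调整。其中"按分量均值收紧下阈值"的策略以及调节系数ratio均为假设，实际取值需按场景标定：

```python
def adapt_thresholds(values, lo, hi, ratio=0.5):
    # 按第二图像某分量 (如 V 或 S) 的均值自适应收紧下阈值:
    # 均值越高, 下阈值相应提高, 但不低于预设的 lo
    mean_v = sum(values) / len(values)
    return max(lo, mean_v * ratio), hi

# 整幅图较亮时, 亮度下阈值 vmin_thres 相应提高
print(adapt_thresholds([0.5, 1.0], lo=0.25, hi=1.0))  # (0.375, 1.0)
```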
当然,也可以利用公式(1)、(2)、(3)的组合来识别出不同目标颜色的光点。
以目标颜色为图8中的cyan颜色为例,可以预先设定识别该颜色的条件为:
Hcenter=0.5,
hthres=1/12=0.0833,
smin_thres=0.15,smax_thres=1.0,
vmin_thres=0.25,vmax_thres=1.0。
当然，这只是个示例，在具体设置过程中，为增加识别光点的稳定性和抗干扰性，可以利用更严格的阈值进行判断，例如可以缩小hthres的值。还可以适当扩大混色对应的hthres，以降低颜色不均匀带来的误识别的影响，不同颜色可以设置不同的hthres。例如，可以适当减小纯色R/G/B对应的hthres，而提高混色(例如，黄色/蓝绿色/紫色/粉红色)对应的hthres。
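以上述cyan的识别条件为例，公式(1)、(2)、(3)的组合判断可以示意为以下Python片段(仅为说明，未做H的环绕处理)：

```python
def is_cyan(h, s, v,
            h_center=0.5, h_thres=1 / 12,
            s_min=0.15, s_max=1.0,
            v_min=0.25, v_max=1.0):
    # 按文中给出的 cyan 识别条件, 组合判断公式 (1)(2)(3):
    # 三个分量同时落在各自阈值区间内才认为是 cyan 光点
    return (h_center - h_thres < h < h_center + h_thres
            and s_min < s < s_max
            and v_min < v < v_max)

print(is_cyan(0.50, 0.8, 0.9))   # True
print(is_cyan(0.70, 0.8, 0.9))   # False: 色调超出 cyan 的阈值区间
```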
图9至图11分别是图5所示的第一图像在HSV空间的H/V/S分量,在对图5中的光点进行识别的过程中(假设图5中包括蓝色B和粉色P两个光点),分别将第二图像的H/V/S分量与多组阈值区间中H/V/S与蓝色和粉色对应的阈值区间进行比较,根据比较结果确定蓝色和粉色的光点。如果第二图像的H/V/S分量落在蓝色对应的阈值范围内,则可以确定蓝色的光点(如图12所示),如果第二图像的H/V/S分量落在粉色对应的阈值范围内,则可以确定粉色的光点(如图13所示)。
请参阅图14,本发明还提供了一种光点识别装置400,光点识别装置400包括:图像获取模块410、图像转换模块420和光点识别模块430。其中,图像获取模块410用于获得光点图像对应的第一图像,所述第一图像为所述光点图像在第一颜色空间中的图像;图像转换模块420用于将所述第一图像转换成第二图像,所述第二图像为所述光点图像在第二颜色空间中的图像;光点识别模块430用于根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点。
在一些实施例中,所述颜色识别条件包括多组阈值区间,每组阈值区间对应一个给定颜色,每组阈值区间包括第二颜色空间中的至少一个颜色参数与该给定颜色所对应的阈值区间。
图像获取模块410还可以用于获得图像传感器采集的光点图像的raw图;对所述raw图处理获得所述光点图像在所述第一颜色空间的图像,作为所述第一图像。
图像转换模块420还可以用于滤除所述第一图像或第二图像中与所述光点大小的差值超过差值阈值的杂点。
在一些实施例中,光点识别模块430还可以用于将所述第二图像的颜色参数与所述多组阈值区间进行比较,根据比较结果确定目标颜色的光点。
在另一些实施例中，光点识别模块430还可以用于分析所述第二图像在第二颜色空间中的各颜色参数的分布情况；根据所述分布情况，调节颜色参数与对应给定颜色的阈值区间。
可选的,所述第一颜色空间为RGB空间,所述第二颜色空间为HSV空间。每组阈值区间至少包括H参数与给定颜色所对应的阈值区间,H参数与给定颜色所对应的阈值区间为(Hcenter-hthres)<H<(Hcenter+hthres),其中,H为H参数,Hcenter为给定颜色预设的H对应值,hthres为给定颜色对应的容忍度。其中,H参数与给定颜色所对应的阈值区间的确定方法包括:将不同颜色转换到HSV空间;对H参数进行归一化;确定多个给定颜色与归一化后的H参数之间的对应关系,将对应的阈值区间作为H参数与给定颜色所对应的阈值区间。
可选的,纯色对应的hthres小于混色对应的hthres
可选的,每组阈值区间还包括S参数与给定颜色所对应的阈值区间,S参数与给定颜色所对应的阈值区间为(smin_thres)<S<(smax_thres),其中,S为S参数,smin_thres为给定颜色预设的饱和度下阈值,smax_thres为给定颜色预设的饱和度上阈值。可以根据所述第二图像的整体或局部饱和度设置smin_thres和smax_thres
可选的,每组阈值区间还包括V参数与给定颜色所对应的阈值区间,V参数与给定颜色所对应的阈值区间为(vmin_thres)<V<(vmax_thres),其中,V为V参数,vmin_thres为给定颜色预设的亮度下阈值,vmax_thres为给定颜色预设的亮度上阈值。可以根据所述第二图像的整体或局部亮度设置vmin_thres和vmax_thres
请参阅图15，本发明还提供了一种终端，具体来讲：终端500可以包括RF(Radio Frequency，射频)电路510、包括有一个或一个以上计算机可读存储介质的存储器520、输入单元530、显示单元540、图像传感器140、音频电路560、WiFi(wireless fidelity，无线保真)模块570、包括有一个或者一个以上处理核心的处理器580、以及电源590等部件。本领域技术人员可以理解，图15中示出的终端结构并不构成对终端的限定，可以包括比图示更多或更少的部件，或者组合某些部件，或者不同的部件布置。其中：
RF电路510可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,交由一个或者一个以上处理器580处理;另外,将涉及上行的数据发送给基站。通常,RF电路510包括但不限于天线、至少一个放大器、调谐器、一个或多个振荡器、用户身份模块(SIM)卡、收发信机、 耦合器、LNA(Low Noise Amplifier,低噪声放大器)、双工器等。此外,RF电路510还可以通过无线通信与网络和其他设备通信。所述无线通信可以使用任一通信标准或协议,包括但不限于GSM(Global System of Mobile communication,全球移动通讯系统)、GPRS(General Packet Radio Service,通用分组无线服务)、CDMA(Code Division Multiple Access,码分多址)、WCDMA(Wideband Code Division Multiple Access,宽带码分多址)、LTE(Long Term Evolution,长期演进)、电子邮件、SMS(Short Messaging Service,短消息服务)等。
存储器520可用于存储软件程序以及模块,处理器580通过运行存储在存储器520的软件程序以及模块,从而执行各种功能应用以及数据处理。存储器520可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据终端500的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器520可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。相应地,存储器520还可以包括存储器控制器,以提供处理器580和输入单元530对存储器520的访问。
输入单元530可用于接收输入的数字或字符信息,以及产生与用户设置以及功能控制有关的键盘、鼠标、操作杆、光学或者轨迹球信号输入。具体地,输入单元530可包括触敏表面531以及其他输入设备532。触敏表面531,也称为触摸显示屏或者触控板,可收集用户在其上或附近的触摸操作(比如用户使用手指、触笔等任何适合的物体或附件在触敏表面531上或在触敏表面531附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触敏表面531可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器580,并能接收处理器580发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触敏表面531。除了触敏表面531,输入单元530还可以包括其他输入设备532。具体地,其他输入设备532可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、 轨迹球、鼠标、操作杆等中的一种或多种。
显示单元540可用于显示由用户输入的信息或提供给用户的信息以及终端500的各种图形用户接口，这些图形用户接口可以由图形、文本、图标、视频和其任意组合来构成。显示单元540可包括显示面板541，可选的，可以采用LCD(Liquid Crystal Display，液晶显示器)、OLED(Organic Light-Emitting Diode，有机发光二极管)等形式来配置显示面板541。进一步的，触敏表面531可覆盖显示面板541，当触敏表面531检测到在其上或附近的触摸操作后，传送给处理器580以确定触摸事件的类型，随后处理器580根据触摸事件的类型在显示面板541上提供相应的视觉输出。虽然在图15中，触敏表面531与显示面板541是作为两个独立的部件来实现输入和输出功能，但是在某些实施例中，可以将触敏表面531与显示面板541集成而实现输入和输出功能。
终端500还可包括除图像传感器140以外的至少一种其他传感器,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板541的亮度,接近传感器可在终端500移动到耳边时,关闭显示面板541和/或背光。作为运动传感器的一种,重力加速度传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于终端500还可配置的陀螺仪、气压计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。
音频电路560、扬声器561、传声器562可提供用户与终端500之间的音频接口。音频电路560可将接收到的音频数据转换为电信号后，传输到扬声器561，由扬声器561转换为声音信号输出；另一方面，传声器562将收集的声音信号转换为电信号，由音频电路560接收后转换为音频数据，再将音频数据输出至处理器580处理后，经RF电路510发送给比如另一终端，或者将音频数据输出至存储器520以便进一步处理。音频电路560还可能包括耳塞插孔，以提供外设耳机与终端500的通信。
WiFi属于短距离无线传输技术，终端500通过WiFi模块570可以帮助用户收发电子邮件、浏览网页和访问流式媒体等，它为用户提供了无线的宽带互联网访问。虽然图15示出了WiFi模块570，但是可以理解的是，其并不属于终端500的必需构成，完全可以根据需要在不改变发明本质的范围内省略。
处理器580是终端500的控制中心,利用各种接口和线路连接整个终端的各个部分,通过运行或执行存储在存储器520内的软件程序和/或模块,以及调用存储在存储器520内的数据,执行终端500的各种功能和处理数据,从而对手机进行整体监控。可选的,处理器580可包括一个或多个处理核心;优选的,处理器580可集成应用处理器和调制解调处理器,其中,应用处理器主要处理操作系统、用户界面和应用程序等,调制解调处理器主要处理无线通信。可以理解的是,上述调制解调处理器也可以不集成到处理器580中。
终端500还包括给各个部件供电的电源590(比如电池),优选的,电源可以通过电源管理系统与处理器580逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。电源590还可以包括一个或一个以上的直流或交流电源、再充电系统、电源故障检测电路、电源转换器或者逆变器、电源状态指示器等任意组件。
尽管未示出，终端500还可以包括摄像头、蓝牙模块等，在此不再赘述。具体在本发明实施例中，终端的显示单元是触摸屏显示器。终端还包括有存储器，以及一个或者一个以上的程序，其中一个或者一个以上程序存储于存储器中，且经配置以由一个或者一个以上处理器执行。所述一个或者一个以上程序包含用于进行以下操作的指令：获得光点图像对应的第一图像，所述第一图像为所述光点图像在第一颜色空间中的图像；将所述第一图像转换成第二图像，所述第二图像为所述光点图像在第二颜色空间中的图像；根据预设的第二颜色空间的颜色识别条件，在所述第二图像中识别出目标颜色的光点。
该方法的具体实施方式与前述实施例描述的光点识别方法的实施方式相同,这里不再赘述。
为进一步优化运动跟踪系统,本发明实施例还公开了另一种运动跟踪系统,与图1所示的运动跟踪系统的不同之处在于,所述图像传感器和所述图像处理装置均集成在终端中。其中,终端可以是头戴显示设备、智能手机、笔记本电脑、平板电脑、智能穿戴设备等等。
控制器120可以与终端通信，通常由用户在一只手或两只手中握住，以便于在控制器120上操作用户输入键等。在玩游戏或者进行虚拟现实活动的时候，用户可以与游戏中的一个或多个角色进行交互。例如，控制器120可以接收来自用户的输入，并且基于接收到的输入将信号发送到终端，终端可以基于该信号来处理信号和/或改变游戏。在一些实施例中，控制器120可以从终端接收用于控制其组件的数据/信号。例如，终端可以发送交互请求等，控制器120可以接收交互请求并作出相应的反馈，例如，用户可以通过眼睛控制终端(例如头戴显示设备)打开某个功能，终端发送对应的请求至控制器120，控制器120接收到请求后发生震动，提醒用户开始操作。
综上所述,本发明实施例提供的方法、装置和系统可以通过第二颜色空间的颜色识别条件,在第二图像中识别出目标颜色的光点,只需要当前帧的图像信息,即可输出识别结果,没有延迟;此外,该方法需要的设备简单,只要在控制器上设置不同颜色的点光源即可,而且根据不同的目标颜色光点只需要修改颜色识别条件,不需要对硬件设备进行改变,同时也支持多个控制器一起使用。
在本发明所提供的实施例中,应该理解到,所揭露的方法,也可以通过其它的方式实现。以上所描述的实施例仅仅是示意性的,例如,附图中的流程图和框图显示了根据本发明的实施例的方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段或代码的一部分,所述模块、程序段或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现方式中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个连续的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或动作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
另外,在本发明各个实施例中的各功能模块可以集成在一起形成一个独立的部分,也可以是各个模块单独存在,也可以两个或两个以上模块集成形成一个独立的部分。
所述功能如果以软件功能模块的形式实现并作为独立的产品销售或使用时，可以存储在一个计算机可读取存储介质中。基于这样的理解，本发明的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来，该计算机软件产品存储在一个存储介质中，包括若干指令用以使得一台计算机设备(可以是个人计算机，服务器，或者网络设备等)执行本发明各个实施例所述方法的全部或部分步骤。而前述的存储介质包括：U盘、移动硬盘、只读存储器(ROM，Read-Only Memory)、随机存取存储器(RAM，Random Access Memory)、磁碟或者光盘等各种可以存储程序代码的介质。

需要说明的是，在本文中，诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来，而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且，术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含，从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素，而且还包括没有明确列出的其他要素，或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下，由语句“包括一个……”限定的要素，并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅为本发明的优选实施例而已,并不用于限制本发明,对于本领域的技术人员来说,本发明可以有各种更改和变化。凡在本发明的精神和原则之内,所作的任何修改、等同替换、改进等,均应包含在本发明的保护范围之内。应注意到:相似的标号和字母在下面的附图中表示类似项,因此,一旦某一项在一个附图中被定义,则在随后的附图中不需要对其进行进一步定义和解释。
以上所述，仅为本发明的具体实施方式，但本发明的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本发明揭露的技术范围内，可轻易想到变化或替换，都应涵盖在本发明的保护范围之内。因此，本发明的保护范围应以权利要求的保护范围为准。

Claims (21)

  1. 一种光点识别方法,其特征在于,包括:
    获得光点图像对应的第一图像,所述第一图像为所述光点图像在第一颜色空间中的图像;
    将所述第一图像转换成第二图像,所述第二图像为所述光点图像在第二颜色空间中的图像;
    根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点。
  2. 根据权利要求1所述的方法,其特征在于,所述颜色识别条件包括多组阈值区间,每组阈值区间对应一个给定颜色,每组阈值区间包括第二颜色空间中的至少一个颜色参数与该给定颜色所对应的阈值区间。
  3. 根据权利要求2所述的方法,其特征在于,所述根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点,包括:
    将所述第二图像的颜色参数与所述多组阈值区间进行比较,根据比较结果确定目标颜色的光点。
  4. 根据权利要求2所述的方法，其特征在于，所述根据预设的第二颜色空间的颜色识别条件，在所述第二图像中识别出目标颜色的光点，包括:
    分析所述第二图像在第二颜色空间中的各颜色参数的分布情况;
    根据所述分布情况,调节颜色参数与对应给定颜色的阈值区间。
  5. 根据权利要求1所述的方法,其特征在于,所述获得光点图像对应的第一图像,包括:
    获得图像传感器采集的光点图像的raw图;
    对所述raw图处理获得所述光点图像在所述第一颜色空间的图像,作为所述第一图像。
  6. 根据权利要求1所述的方法,其特征在于,所述根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点之前,还包括:
    滤除所述第一图像或第二图像中与所述光点大小的差值超过差值阈值的杂点。
  7. 根据权利要求1至6任意一项所述的方法,其特征在于,所述第一颜色空间为RGB空间,所述第二颜色空间为HSV空间。
  8. 根据权利要求7所述的方法,其特征在于,每组阈值区间至少包括H参数与给定颜色所对应的阈值区间,H参数与给定颜色所对应的阈值区间为(Hcenter-hthres)<H<(Hcenter+hthres),其中,H为H参数,Hcenter为给定颜色预设的H对应值,hthres为给定颜色对应的容忍度。
  9. 根据权利要求8所述的方法,其特征在于,H参数与给定颜色所对应的阈值区间的确定方法包括:
    将不同颜色转换到HSV空间;
    对H参数进行归一化;
    确定多个给定颜色与归一化后的H参数之间的对应关系,将对应的阈值区间作为H参数与给定颜色所对应的阈值区间。
  10. 根据权利要求8所述的方法,其特征在于,纯色对应的hthres小于混色对应的hthres
  11. 根据权利要求8所述的方法,其特征在于,每组阈值区间还包括S参数与给定颜色所对应的阈值区间,S参数与给定颜色所对应的阈值区间为(smin_thres)<S<(smax_thres),其中,S为S参数,smin_thres为给定颜色预设的饱和度下阈值,smax_thres为给定颜色预设的饱和度上阈值。
  12. 根据权利要求11所述的方法,其特征在于,根据所述第二图像的整体或局部饱和度设置smin_thres和smax_thres
  13. 根据权利要求8所述的方法,其特征在于,每组阈值区间还包括V参数与给定颜色所对应的阈值区间,V参数与给定颜色所对应的阈值区间为(vmin_thres)<V<(vmax_thres),其中,V为V参数,vmin_thres为给定颜色预设的亮度下阈值,vmax_thres为给定颜色预设的亮度上阈值。
  14. 根据权利要求13所述的方法,其特征在于,根据所述第二图像的整体或局部亮度设置vmin_thres和vmax_thres
  15. 一种光点识别装置,其特征在于,包括:
    图像获取模块,用于获得光点图像对应的第一图像,所述第一图像为所述光点图像在第一颜色空间中的图像;
    图像转换模块,用于将所述第一图像转换成第二图像,所述第二图像为所述光点图像在第二颜色空间中的图像;
    光点识别模块,用于根据预设的第二颜色空间的颜色识别条件,在所述第二图像中识别出目标颜色的光点。
  16. 一种图像处理装置,其特征在于,包括:处理器以及存储器,其中,所述处理器连接所述存储器,所述存储器用于存储指令,所述处理器用于执行所述指令,当所述处理器在执行所述指令时,可根据所述指令执行如权利要求1-14中任一权利要求的方法,以实现对图像进行处理以识别出目标颜色的光点。
  17. 一种图像处理系统,其特征在于,包括图像传感器以及图像处理装置,所述图像处理装置包括处理器以及存储器,所述处理器连接所述存储器与所述图像传感器,所述存储器用于存储指令,所述处理器用于执行所述指令,当所述处理器在执行所述指令时,可根据所述指令执行如权利要求1-14中任一权利要求的方法,以实现对图像传感器采集的图像进行处理以识别出目标颜色的光点。
  18. 一种终端,其特征在于,包括:图像传感器、存储器以及分别与所述图像传感器、存储器耦合的处理器,所述存储器用于存储指令以及所述图像传感器采集到的图像,所述处理器用于执行所述指令,当所述处理器在执行所述指令时,可根据所述指令执行如权利要求1-14中任一权利要求的方法,以实现对所述图像传感器采集到的图像进行处理以识别出目标颜色的光点。
  19. 一种跟踪定位系统,其特征在于,包括终端、设置有点光源的控制器,其中,所述终端设置有图像传感器,所述终端为如权利要求18所述的终端。
  20. 一种跟踪定位系统,其特征在于,包括图像传感器、设置有点光源的控制器以及图像处理装置,所述图像处理装置分别耦合所述图像传感器以及所述控制器,其中,所述图像处理装置为如权利要求16所述的图像处理装置。
  21. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质存储了光点识别的程序代码,所述程序代码包括用于执行如权利要求1-14任意一权利要求所述的方法的指令。
PCT/CN2017/099544 2017-08-29 2017-08-29 光点识别方法、装置以及系统 WO2019041147A1 (zh)
