WO2020000397A1 - Method for an electronic device to identify a laser point emitted by a laser, and electronic device - Google Patents

Method for an electronic device to identify a laser point emitted by a laser, and electronic device

Info

Publication number: WO2020000397A1
Application number: PCT/CN2018/093748
Authority: WO (WIPO PCT)
Prior art keywords: image, frame, pixel, difference, laser
Other languages: English (en), French (fr)
Inventors: 周琨, 罗合见
Original Assignee: 深圳市欢创科技有限公司
Application filed by 深圳市欢创科技有限公司
Priority to PCT/CN2018/093748
Publication of WO2020000397A1

Classifications

    • A: HUMAN NECESSITIES
    • A01: AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
    • A01K: ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
    • A01K5/00: Feeding devices for stock or game; Feeding wagons; Feeding stacks
    • A01K5/02: Automatic devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • the present disclosure relates to the technical field of image analysis, and in particular to a method for an electronic device to identify a laser spot emitted by a laser, and to an electronic device.
  • Feeding and interactive entertainment are two core functions of pet robots.
  • interactive entertainment methods include two-way call interaction and laser interaction.
  • the pet owner can watch the activity status of the pet at home anytime and anywhere through the smartphone APP, and interact with the pet through laser interaction. For example, when the user clicks position A on the mobile phone screen, the laser spot is correspondingly projected at the position in the home corresponding to position A.
  • in order to control the position where the laser spot is projected at home, the user needs to find the laser spot in the image on the screen of the mobile phone.
  • in the traditional technique, the user finds the laser spot manually.
  • the recognition rate of the laser spot in the image is not high, and the user cannot effectively control the position where the laser spot is projected.
  • An object of the embodiments of the present disclosure is to provide a method for an electronic device to identify a laser point emitted by a laser, and an electronic device, which identify laser points with high accuracy.
  • the embodiments of the present disclosure provide the following technical solutions:
  • an electronic device including:
  • at least one processor;
  • a first frame image and a second frame image are collected in two states, namely while the laser emits laser light and while the laser emission is suspended, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
  • the position of the laser spot in the first frame image or the second frame image is identified.
  • the camera module includes a camera and a driving component, the driving component is connected to the camera, and the driving component is used for driving the camera to move.
  • the driving assembly includes a linear motor, a rotary motor, a slide rail, and a bracket, and an output end of the linear motor is connected to a fixed end of the rotary motor, and the bracket is fixedly installed on an output end of the rotary motor
  • the camera and the laser are both mounted on the bracket, and the camera is housed in the slide rail and can move freely along the slide rail.
  • the camera module includes a fill light component, and the fill light component is configured to fill the camera with light when the camera captures an image.
  • the camera includes one or more optical sensors and a lens, and the one or more optical sensors are disposed on an imaging surface of the lens.
  • identifying the position of the laser spot in the first frame image or the second frame image includes:
  • identifying the position of the laser spot in the first frame image or the second frame image based on pixel difference information of the collected first frame image and the collected second frame image.
  • the identifying the position of the laser spot in the first frame image or the second frame image based on the frame difference image includes:
  • an embodiment of the present disclosure provides a method for an electronic device to identify a laser spot emitted by a laser, including:
  • the electronic device collects a first frame image and a second frame image, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
  • the electronic device identifies a position of the laser spot in the first frame image or the second frame image based on pixel difference information of the collected first frame image and the collected second frame image.
  • identifying, by the electronic device, the position of the laser spot in the first frame image or the second frame image based on pixel difference information of the collected first frame image and the collected second frame image includes:
  • the electronic device generates a frame difference image based on the pixel difference information of the collected first frame image and the collected second frame image;
  • the electronic device identifies a position of the laser spot in the first frame image or the second frame image based on the frame difference image.
  • the color of the laser point corresponds to a target color component
  • the first frame image includes the laser point
  • the second frame image does not include the laser point
  • generating, by the electronic device, the frame difference image based on pixel difference information of the collected first frame image and the collected second frame image includes: performing a pixel frame difference operation on the first frame image and the second frame image to generate a frame difference image, wherein each frame difference pixel in the frame difference image satisfies at least the following conditions:
  • when the difference obtained by subtracting the target color component of the pixel at the same coordinates in the second frame image from the target color component of the pixel in the first frame image is greater than or equal to a preset frame difference threshold, the target color component of the pixel at the same coordinates in the frame difference image is set to a first preset color value;
  • otherwise, the target color component of the pixel at the same coordinates in the frame difference image is set to a second preset color value, and the first preset color value is different from the second preset color value.
  • the color of the laser spot is red
  • the target color component is a red component
  • the non-target color component is a green component and a blue component
  • when the difference obtained by subtracting the red component of the pixel at the same coordinates in the second frame image from the red component of each pixel in the first frame image is greater than or equal to a preset frame difference threshold, the red component of the corresponding pixel in the frame difference image is the first preset color value;
  • when the difference between the red components is less than the preset frame difference threshold, the red component of the corresponding pixel in the frame difference image is the second preset color value;
  • when the difference between the red components is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the green component of the pixel at the same coordinates in the second frame image from the green component of each pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the green component of the corresponding pixel in the frame difference image is a third preset color value;
  • when the difference between the red components is less than the preset frame difference threshold, or the difference between the green components is less than the preset frame difference threshold, the green component of the corresponding pixel in the frame difference image is a fourth preset color value, and the third preset color value is different from the fourth preset color value;
  • when the difference between the red components is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the blue component of the pixel at the same coordinates in the second frame image from the blue component of each pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the blue component of the corresponding pixel in the frame difference image is a fifth preset color value;
  • when the difference between the red components is less than the preset frame difference threshold, or the difference between the blue components is less than the preset frame difference threshold, the blue component of the corresponding pixel in the frame difference image is a sixth preset color value, and the fifth preset color value is different from the sixth preset color value.
  • the electronic device identifying the position of the laser spot in the first frame image or the second frame image based on the frame difference image includes:
  • the electronic device converts the frame difference image into a binary image
  • the electronic device searches one or more image connected domains from the binary image
  • the electronic device acquires coordinate positions of the selected one or more image connected domains in the first frame image or the second frame image.
  • converting, by the electronic device, the frame difference image into a binary image includes:
  • when the target color component of a pixel in the frame difference image is the first preset color value, the electronic device sets the pixel value of the pixel to a first preset pixel value;
  • when the target color component of a pixel in the frame difference image is the second preset color value, the electronic device sets the pixel value of the pixel to a second preset pixel value, and the first preset pixel value is different from the second preset pixel value.
  • the electronic device selecting, from the one or more image connected domains, one or more image connected domains that satisfy a laser point image condition as a laser spot, including:
  • the electronic device calculates a weight of each of the image connected domains
  • the electronic device traverses the image connected domain with the highest weight from the one or more image connected domains, and selects the image connected domain with the highest weight as the laser spot.
  • the types of image connected domains include the image connected domain being processed and the current candidate image connected domain; both the image connected domain being processed and the current candidate image connected domain include several feature values, and each feature value corresponds to a weight;
  • the electronic device calculating the weight of each of the image connected domains includes:
  • the electronic device adding together the weights of the several feature values to obtain the weight of the image connected domain being processed or of the current candidate image connected domain.
  • traversing, by the electronic device, the image connected domain with the highest weight from the one or more image connected domains according to the comparison result includes:
  • when the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain, the electronic device replaces the current candidate image connected domain with the image connected domain being processed, using it as the new current candidate image connected domain;
  • otherwise, the electronic device discards the image connected domain being processed and retains the current candidate image connected domain;
  • the electronic device determines whether there is a next image connected domain to be processed;
  • if so, the electronic device takes the next image connected domain as the image connected domain being processed, and returns to determining whether the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain;
  • if not, the electronic device determines whether there is a current candidate image connected domain;
  • if so, the electronic device uses the current candidate image connected domain as the optimal image connected domain;
  • if not, the electronic device determines that there is no optimal image connected domain.
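The traversal just described amounts to keeping a running best candidate. A minimal sketch in Python (an editorial illustration, not part of the original disclosure; the list of (domain, weight) pairs is assumed to come from the weight-calculation step above):

```python
def find_optimal_domain(weighted_domains):
    """Traverse the image connected domains and return the one with the
    highest weight, or None when no optimal image connected domain exists.

    `weighted_domains` is a list of (domain, weight) pairs produced by the
    weight-calculation step described above.
    """
    candidate, candidate_weight = None, None  # current candidate domain
    for domain, weight in weighted_domains:   # domain "being processed"
        if candidate is None or weight > candidate_weight:
            # the domain being processed becomes the new current candidate
            candidate, candidate_weight = domain, weight
        # otherwise the domain being processed is discarded and the
        # current candidate image connected domain is retained
    return candidate
```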
  • the type of the characteristic value includes any one or more of the following:
  • a center brightness difference value, which is the absolute value of the difference between the brightness of the pixel in the first frame image and the brightness of the pixel in the second frame image corresponding to the center of the image connected domain;
  • a target color pixel ratio R1, which is the ratio of target color pixels to all pixels in the image connected domain, within the region of the first frame image corresponding to the image connected domain;
  • a white pixel ratio Rw, which is the ratio of white pixels to all pixels in the image connected domain, within the region of the first frame image corresponding to the image connected domain;
  • a target color component ratio R2, which is the ratio of pixels whose color is based on the target color component, excluding white, to all pixels in the image connected domain, within the region of the frame difference image corresponding to the image connected domain;
  • for example, when the target color component is the red component, the target color component ratio R2 is the ratio of the red, pink, and yellow pixels within the region of the frame difference image corresponding to the image connected domain to all pixels in the image connected domain.
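As an editorial illustration of the ratio features, a sketch follows; it assumes 8-bit BGR frames as NumPy arrays and a rectangle (x, y, w, h) enveloping the connected domain, and the thresholds used to classify "target color" and "white" pixels are hypothetical examples, not values given in the disclosure:

```python
import numpy as np

def ratio_features(first_frame, region, target=2, color_thr=150, white_thr=200):
    """Compute the target color pixel ratio R1 and the white pixel ratio Rw
    over the region of the first frame image corresponding to an image
    connected domain. `target` indexes the target color channel
    (2 = red with OpenCV's BGR channel order)."""
    x, y, w, h = region
    patch = first_frame[y:y + h, x:x + w].astype(np.int32)
    total = patch.shape[0] * patch.shape[1]
    # a pixel counts as white when all three channels are high
    white = np.all(patch >= white_thr, axis=2)
    # a pixel counts as target-colored when the target channel is high
    # and dominates the other two channels
    others = [c for c in range(3) if c != target]
    target_color = ((patch[..., target] >= color_thr)
                    & (patch[..., target] > patch[..., others[0]])
                    & (patch[..., target] > patch[..., others[1]]))
    return target_color.sum() / total, white.sum() / total
```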
  • the method further includes:
  • the electronic device marks the laser spot.
  • an embodiment of the present disclosure provides a computer program product.
  • the computer program product includes a computer program stored on a non-volatile computer-readable storage medium.
  • the computer program includes program instructions. When the program instructions are executed by an electronic device, the electronic device is caused to execute the method for identifying a laser spot emitted by a laser according to any of the embodiments above.
  • an embodiment of the present disclosure further provides a non-volatile computer-readable storage medium, where the non-volatile computer-readable storage medium stores computer-executable instructions, and the computer-executable instructions are used to cause an electronic device to perform the method for identifying a laser spot emitted by a laser according to any of the embodiments above.
  • a first frame image and a second frame image are collected, wherein one of the first frame image and the second frame image contains a laser spot emitted by a laser, and the other frame does not contain the laser spot. Therefore, according to the pixel difference information of the first frame image and the second frame image, the electronic device can adapt to complex imaging environments, and quickly and accurately identify the position of the laser spot in the first frame image or the second frame image.
  • FIG. 1 is a schematic diagram of a pet device provided by an embodiment of the present disclosure;
  • FIG. 1a is a functional block diagram of the camera module in FIG. 1;
  • FIG. 1b is a functional block diagram of the driving component in FIG. 1a;
  • FIG. 1c is a schematic diagram of a pet device installed in a house according to an embodiment of the present disclosure;
  • FIG. 1d is a schematic diagram of a user interface of a pet application of a user terminal according to an embodiment of the present disclosure;
  • FIG. 1e is a schematic diagram of a user interface for marking a laser point according to an embodiment of the present disclosure;
  • FIG. 2 is a schematic flowchart of a method for identifying a laser point according to an embodiment of the present disclosure;
  • FIG. 2a is a schematic diagram of a pet device interacting with a pet according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic flowchart of S22 in FIG. 2;
  • FIG. 3a is a frame image containing a laser point in a complex imaging environment according to an embodiment of the present disclosure;
  • FIG. 3b is another frame image, not containing a laser point, in a complex imaging environment according to an embodiment of the present disclosure;
  • FIG. 4 is a frame difference image obtained by performing a pixel frame difference calculation on FIG. 3a and FIG. 3b according to an embodiment of the present disclosure;
  • FIG. 4a is a schematic flowchart of S222 in FIG. 3;
  • FIG. 4b is a schematic diagram of the binarized frame difference image after the binarization processing of FIG. 4;
  • FIG. 4c is a schematic diagram of each image connected domain in FIG. 4 enveloped by a rectangle;
  • FIG. 4d is a schematic diagram of the image connected domains in FIG. 4 that meet the basic shape conditions of the laser spot;
  • FIG. 5 is a schematic flowchart of S2222 in FIG. 4a;
  • FIG. 5a is a schematic flowchart of S51 in FIG. 5;
  • FIG. 5b is a schematic flowchart of S53 in FIG. 5;
  • FIG. 5c is a schematic diagram of marking the laser spot in FIG. 3a;
  • FIG. 6 is a schematic structural diagram of a laser point recognition device according to an embodiment of the present disclosure;
  • FIG. 6a is a schematic structural diagram of an identification module in FIG. 6;
  • FIG. 6b is another schematic structural diagram of the identification module in FIG. 6;
  • FIG. 6c is a schematic structural diagram of a laser point recognition device according to another embodiment of the present disclosure;
  • FIG. 6d is a schematic structural diagram of a laser point recognition device according to still another embodiment of the present disclosure;
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
  • the laser point recognition method in the embodiment of the present disclosure may be executed in any suitable type of electronic device with computing capability.
  • the electronic device may be a user terminal, and the user terminal includes a smart phone, a computer, a personal digital assistant (PDA), a tablet, a smart watch, a desktop computer, etc.
  • the electronic device may also be a pet device (such as a pet robot, or any other machine device with a laser source); the pet device is configured at home or in another suitable location, and the pet device receives the user's control command to adjust the projection position of the laser point at home, so as to play with the pet.
  • the electronic device of the embodiment of the present disclosure may be configured into any suitable shape.
  • the pet device may be fixedly installed at a specific position in the home, or may be a mobile robot that moves according to a preset logic at home.
  • the pet device 100 includes a control unit 11, a wireless communication unit 12, a feeding unit 13, an audio unit 14, a camera module 15, and a laser 16.
  • the control unit 11 serves as a control core of the pet device 100 and coordinates the work of the various units.
  • the control unit 11 may be a general-purpose processor (such as a central processing unit, CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a programmable logic device (FPGA, CPLD, etc.), a single-chip microcomputer, an ARM (Acorn RISC Machine), other programmable logic devices, discrete gate or transistor logic, discrete hardware components, or any combination of these components.
  • the control unit 11 may be any conventional processor, controller, microcontroller, or state machine.
  • the control unit 11 may also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, multiple microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • the wireless communication unit 12 is configured to wirelessly communicate with a user terminal, and the wireless communication unit 12 is electrically connected to the control unit 11.
  • the user sends a control instruction to the pet device 100 through the user terminal;
  • the wireless communication unit 12 receives the control instruction and sends the control instruction to the control unit 11, and the control unit 11 controls the pet device 100 according to the control instruction.
  • the wireless communication unit 12 includes a combination of one or more of a broadcast receiving module, a mobile communication module, a wireless Internet module, a short-range communication module, and a positioning information module.
  • the broadcast receiving module receives a broadcast signal and / or broadcast related information from an external broadcast management server via a broadcast channel.
  • the broadcast receiving module can use a digital broadcasting system to receive digital broadcasting signals.
  • Digital broadcasting systems such as digital terrestrial multimedia broadcasting (DMB-T), satellite digital multimedia broadcasting (DMB-S), media forward link only (MediaFLO), Digital Video Broadcasting-Handheld (DVB-H), or Terrestrial Integrated Services Digital Broadcasting (ISDB-T).
  • the mobile communication module sends a wireless signal to at least one of a base station, an external terminal, and a server on a mobile communication network, or can receive a wireless signal from at least one of the base station, an external terminal, and a server.
  • the wireless signal may include a voice call signal, a video call signal, or various forms of data.
  • the wireless Internet module refers to a module for wireless Internet connection, and can be built-in or external to the terminal.
  • Wireless Internet technologies such as Wireless LAN (WLAN, Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), and High Speed Downlink Packet Access (HSDPA) can be used.
  • the short-range communication module refers to a module for short-range communication.
  • Short-range communication technologies such as Bluetooth, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), or ZigBee can be used.
  • the positioning information module is a module for obtaining a position of a mobile terminal, such as a Global Positioning System (GPS) module.
  • the feeding unit 13 is used to release food to the pet.
  • the feeding unit 13 is electrically connected to the control unit 11.
  • the control unit 11 sends a feeding instruction to the feeding unit 13, and the feeding unit 13 feeds the pet according to the feeding instruction. Therefore, when the user is unable to take care of the pet at home in real time, the user may remotely control the feeding unit 13 to release food, or the feeding unit 13 may release food to the pet at a preset time.
  • the feeding unit 13 feeds different foods depending on the pet. For example, when the pets are dogs, the feeding unit 13 feeds dog food; when the pets are cats, the feeding unit 13 feeds cat food; when the pets are fish, the feeding unit 13 feeds fish food.
  • the feeding unit 13 includes a storage device and a driving mechanism.
  • the storage device is used to carry food, and the driving mechanism is used to control the storage device to release food.
  • the storage device may be a device such as a turntable, a storage box, and the like.
  • the driving mechanism includes a motor, a rotating shaft, a gear transmission mechanism, and a bracket.
  • the motor is electrically connected to the control unit 11.
  • the output shaft of the motor is connected to one end of the rotating shaft;
  • the other end of the rotating shaft is connected to the gear transmission mechanism, one end of the bracket is connected to the gear transmission mechanism, and the other end of the bracket is connected to the storage device.
  • the motor drives the gear transmission mechanism according to control instructions sent by the control unit 11. For example, when feeding is required, the motor drives the gear transmission mechanism to rotate, and the gear transmission mechanism drives the storage device to move through the bracket, so that the storage device moves food out to the external environment of the pet device. After the storage device moves a preset distance, the motor stops driving the gear transmission mechanism.
  • the audio unit 14 is configured to collect sounds from the environment around the pet device 100 or push sounds.
  • the audio unit 14 is electrically connected to the control unit 11.
  • when the user communicates with the pet by voice, the user sends voice information through the user terminal; the wireless communication unit 12 receives the voice information and sends it to the control unit 11, and the control unit 11 pushes the voice information out through the audio unit 14. For example, the user says "Jerry, is today's lunch delicious?"; the audio unit 14 of the pet device 100 then plays "Jerry, is today's lunch delicious?", and the pet hears the owner's voice and calls out a series of excited sounds.
  • the audio unit 14 collects the sound of the pet, and feeds back the sound of the pet to the user terminal.
  • the user listens to the sound of the pet, and judges the pet's current emotion based on the sound of the pet, so that the user can know the current life status of the pet in a timely and comprehensive manner.
  • the audio unit 14 may be an electroacoustic transducer such as a loudspeaker, a speaker, or a microphone, wherein the number of loudspeakers or speakers may be one or more, the number of microphones may be multiple, and multiple microphones may form a microphone array to efficiently capture sound.
  • the microphone can be electrodynamic (moving coil, ribbon), condenser (DC polarized), piezoelectric (crystal, ceramic), electromagnetic, carbon, semiconductor, etc., or any combination thereof.
  • the microphone may be a micro-electromechanical system (MEMS) microphone.
  • the camera module 15 is used to capture the environment of the pet.
  • the camera module 15 is electrically connected to the control unit 11.
  • the camera module 15 obtains an image of the environment of the pet and outputs the image to the control unit 11, so that the control unit 11 can perform the next logical operation based on this image.
  • the camera module 15 includes a camera 151 and a driving component 152, the driving component 152 is connected to the camera 151, and the driving component 152 is used to drive the camera 151 to move, so that the camera 151 can capture images from multiple angles.
  • the camera includes one or more optical sensors and lenses, and the one or more optical sensors are disposed on the imaging surface of the lens.
  • the generated optical image is projected onto the optical sensor through the lens.
  • The optical sensor may be a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) sensor.
  • CMOS sensors can be back-illuminated CMOS sensors or stacked CMOS sensors.
  • the camera also integrates an ISP (Image Signal Processor).
  • the ISP is used to process the output data of the optical sensor, implementing functions such as AEC (automatic exposure control), AGC (automatic gain control), AWB (automatic white balance), color correction, and the like.
  • the camera module 15 further includes a fill light component 153.
  • the fill light component 153 is configured to fill the camera 151 with light when the camera 151 captures an image. For example, when the light in the room environment is insufficient, the control unit 11 activates the fill light component 153 to emit light, and therefore, the camera module 15 can more clearly capture the object.
  • the fill light component 153 may be a light source such as an LED lamp.
  • the laser 16 is used to project a laser point.
  • the laser 16 is electrically connected to the control unit 11.
  • the control unit 11 controls the direction in which the laser 16 projects the laser point, and further adjusts the projection position of the laser point.
  • the laser 16 includes any type of laser source capable of projecting a laser spot, and the laser source includes a solid laser, a gas laser, a liquid laser, a semiconductor laser, a free electron laser, and the like.
  • the laser projects a light-emitting area of a certain size.
  • this light-emitting area is customarily referred to as a laser point, that is, a laser spot.
  • the basic characteristics of a laser spot are a bright white center, a red surrounding color, and a circular or circular-like shape (such as a regular or irregular ellipse). It can be understood that the color around the laser spot is not limited to red; it can also be green and so on.
  • the driving assembly 152 includes a linear motor 1521, a rotary motor 1522, a slide rail 1523, and a bracket 1524.
  • the output end of the linear motor 1521 is connected to the fixed end of the rotary motor 1522.
  • the bracket 1524 is fixedly mounted on the rotary motor 1522.
  • the camera 151 and the laser 16 are both mounted on the bracket 1524.
  • the camera 151 is housed in the slide rail 1523 and can move freely up and down along the slide rail 1523.
  • the linear motor 1521 and the rotary motor 1522 are controlled by the control unit 11. When the linear motor 1521 rotates forward, the output end of the linear motor 1521 drives the rotary motor 1522 to move upward. The rotary motor 1522 drives the camera 151 and the laser 16 to move upward along the slide rail 1523 through the bracket 1524. When the linear motor 1521 is reversed, the output end of the linear motor 1521 drives the rotary motor 1522 to move downward. The rotary motor 1522 drives the camera 151 and the laser 16 to move downward along the slide rail 1523 through the bracket 1524.
  • when the rotary motor 1522 rotates forward, the output end of the rotary motor 1522 drives the camera 151 and the laser 16 to rotate clockwise through the bracket 1524;
  • when the rotary motor 1522 rotates in reverse, the output end of the rotary motor 1522 drives the camera 151 and the laser 16 to rotate counterclockwise through the bracket 1524.
  • Pets are very interested in laser points.
  • when the laser point is projected at a certain position, the pet follows the laser spot and moves to that position. Therefore, when the laser point is projected at different positions, the pet constantly changes its own position in order to chase the laser point, thereby achieving the interaction between the user and the pet.
  • the pet device 100 is installed on the balcony 110 of the home.
  • the pet 120 is in the home, and the user 150 operates the user terminal 130 to remotely control the petting device 100 through the Internet.
  • the user terminal 130 supports the installation of various desktop applications, such as one or more of the following: a pet application, a drawing application, a presentation application, a word processing application, a spreadsheet application, a game application, a phone application, a video conference application, an email application, an instant messaging application, a training support application, a photo application, a digital camera application, a digital video recorder application, a web browsing application, a digital music player application, a digital video player application, and more.
  • the user interface 300 of the pet application is used to present the environment in which the pet device 100 currently shoots the pet.
  • the user interface 300 includes multiple virtual buttons, such as a return button 31, a setting button 32, a camera button 33, a video button 34, a feeding button 35, an audio button 36, or a laser button (not shown).
  • by clicking a specific virtual button, the user can remotely control the corresponding function of the pet device 100. For example, when the user clicks the feeding button 35, the user terminal 130 sends a feeding instruction to the pet device 100, and the feeding unit 13 starts feeding according to the feeding instruction.
  • when the user clicks the laser button, the user terminal 130 sends a laser mode instruction to the pet device 100;
  • the laser 16 starts to project laser light according to the laser mode instruction;
  • the wireless communication unit 12 sends an image corresponding to the position where the laser spot is projected to the user terminal, and the user terminal displays the image through its user interface. Therefore, in the laser mode, the user recognizes the position of the laser point in the image with the naked eye; when the projection position of the laser point needs to be adjusted, the user clicks elsewhere on the user interface, and the pet device 100 controls the laser 16 to project the laser spot at the position in the home corresponding to the clicked location. At the same time, the camera module 15 moves following the movement of the laser 16.
  • in some cases, the imaging environment at home is relatively complicated; for example, there is a large black background in the home (for example, a television screen that is not displaying a picture), or there is a transparent object (for example, window glass).
  • in such an environment, the user cannot identify the laser spot from the image even though the laser spot has been emitted.
  • the user cannot determine whether the laser spot is projected on the black background or on the transparent object, which greatly reduces the user's experience.
  • the power of the laser point is lower than 1 mW; therefore, the brightness of the laser point is not high.
  • laser points of such brightness are not prominent when imaged, especially when the environmental background is bright: because the difference between the brightness of the laser spot and that of the environment is relatively small, it is more difficult for the user to identify the laser spot in the environment.
  • in addition, when there is a light-emitting object similar to a laser point in the home environment, such as a diamond pendant or earring of a family member, the light-emitting object is displayed in the image at the same time, and the user may misjudge the light-emitting object as a laser point.
  • in the traditional technique, the user recognizes the laser point with the naked eye and clicks the position of the laser point on the current screen to adjust its next position; in this way, the error is relatively large and the accuracy is not high.
  • the solution provided by the embodiments of the present disclosure can automatically identify the position of the laser spot in the image and mark the laser spot. Please refer to FIG. 1e, which uses a circle to surround the laser spot 140.
  • the circle can be of any color, such as green, orange, etc., so that the laser spot is prominently marked with a clear mark.
  • FIG. 2 is a schematic flowchart of a laser point recognition method according to an embodiment of the present disclosure.
  • the laser point recognition method 200 includes:
  • the electronic device collects the first frame image and the second frame image, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
  • the first frame image and the second frame image may be obtained by the camera module 15 continuously photographing the surrounding environment, wherein the word “continuous” means that no additional image frame is inserted between the two frames of image.
  • in some embodiments, the first frame image contains the laser point emitted by the laser and the second frame image does not contain the laser point;
  • in other embodiments, the first frame image does not contain the laser point emitted by the laser, and the second frame image contains the laser point.
  • that is, either of the two frames of images may contain the laser spot. The following assumes that the first frame image contains the laser spot emitted by the laser and the second frame image does not contain the laser spot. It should be noted that the assumption made here is not used to limit the scope of protection of the present disclosure.
  • the camera module 15 starts acquiring a first frame image when the laser 16 projects a laser light, where the first frame image includes a laser point.
  • the camera module 15 acquires a second frame image again, and the second frame image does not include a laser point.
  • the shooting frame rate of the camera module 15 is relatively high.
  • in some embodiments, the frame rate of the camera module 15 is 25-30 frames per second, and the time interval between the first frame image and the second frame image is 33-40 milliseconds. Because the shooting speed is relatively fast, apart from the change from containing the laser point to not containing the laser point, the other changes between the two frames of images are relatively small or absent.
  • the control unit 11 uses a low-pass average filtering method to process the first frame image and the second frame image respectively, filtering out random high-frequency noise generated by the camera module 15 when shooting the first frame image and the second frame image. During filtering, the control unit 11 may select a 7x7 low-pass average filter to process the two frames of images. It should be noted that the embodiments of the present disclosure are not limited to this, and other types of random noise generated in the images may also be filtered out.
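A minimal sketch of this preprocessing step, assuming OpenCV (cv2) and two saved frames (the file names are illustrative only):

```python
import cv2

# the two consecutive frames captured by the camera module 15
first_frame = cv2.imread("first_frame.png")    # contains the laser spot
second_frame = cv2.imread("second_frame.png")  # does not contain the spot

# 7x7 low-pass average (mean) filter, applied to both frames to suppress
# random high-frequency noise introduced during capture
first_filtered = cv2.blur(first_frame, (7, 7))
second_filtered = cv2.blur(second_frame, (7, 7))
```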
  • the laser 16 projects a laser spot 140 on the pet 120, and the pet 120 pays attention to the laser spot 140.
  • the control unit 11 activates the laser 16 to project the laser point 140 on the pet 120, and the camera module 15 captures a first frame image.
  • the first frame image includes the pet 120 and the laser point 140.
  • the control unit 11 turns off the laser 16, the laser 16 stops projecting the laser spot 140 to the pet 120, and the camera module 15 captures a second frame image, which includes the pet 120 but does not include the laser spot 140.
  • the control unit 11 acquires the above-mentioned two consecutive frames of images.
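For illustration only, the on/off capture sequence reduces to the following sketch; the `laser` and `camera` objects stand in for hypothetical device interfaces to the laser 16 and the camera module 15 and are not defined by the disclosure:

```python
def capture_frame_pair(laser, camera):
    """Capture two consecutive frames: one with and one without the spot."""
    laser.on()                    # the laser 16 projects the laser point
    first_frame = camera.grab()   # first frame image, contains the spot
    laser.off()                   # stop projecting the laser point
    second_frame = camera.grab()  # second frame image, no laser spot
    return first_frame, second_frame
```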
  • the electronic device identifies the position of the laser spot in the first frame image or the second frame image based on the pixel difference information of the collected first frame image and the collected second frame image.
  • the pixel difference information includes pixel difference values between pixels corresponding to the same coordinates of the first frame image and the second frame image.
  • between the two frames of images, apart from the change from containing the laser point to not containing the laser point, the other changes are relatively small or absent. Therefore, the pixel difference information mainly reflects the pixel differences caused by the laser spot image between the two frames of images.
  • the control unit 11 compares the first frame image with the second frame image according to an image analysis algorithm, and recognizes the laser spot. Further, the control unit 11 determines the position of the laser point in the first frame image by calculating the pixel coordinates of the laser point in the first frame image.
  • in this way, the pet device 100 can adapt to complex imaging environments, and quickly and accurately identify the position of the laser point from the first frame image.
  • S22 includes:
  • the electronic device generates a frame difference image based on the pixel difference information of the collected first frame image and the collected second frame image.
  • the electronic device identifies the position of the laser spot in the first frame image or the second frame image according to the frame difference image.
  • the frame difference image is an image obtained by performing a pixel frame difference operation between the first frame image and the second frame image.
  • the frame difference image retains the laser spot image, because between the two frames of images the change at the laser spot is obvious while the changes of other objects are not obvious.
  • by performing the pixel frame difference operation on the first frame image and the second frame image, interference images that do not meet the imaging characteristics of the laser spot can be quickly filtered out, and at least the laser spot image is retained.
  • when the background of the environment in which the laser point is located is a monochrome background, for example a white background or a black background, the control unit 11 performs the pixel frame difference operation on the first frame image and the second frame image, thereby filtering out the areas other than the laser point imaging area, so that only the laser spot imaging area is left in the frame difference image; the control unit 11 can therefore quickly and accurately identify the position of the laser spot in the first frame image.
  • FIG. 3a is a first frame image containing a laser point in a complex imaging environment, provided by an embodiment of the present disclosure.
  • FIG. 3b is a second frame image not containing a laser point in a complex imaging environment, provided by an embodiment of the present disclosure.
  • in a complex imaging environment, the control unit 11 filters out most of the interference image areas, that is, the non-laser-point imaging areas, by performing the pixel frame difference operation on the first frame image and the second frame image, thereby preparing for the subsequent rapid and accurate identification of the position of the laser spot in the first frame image.
  • after the control unit 11 obtains the frame difference image, it selects the image area corresponding to the laser point imaging from the frame difference image according to the laser point image conditions, and uses this image area as the laser point to identify the laser point's position in the first frame image.
  • the method of generating the frame difference image may be various image processing methods.
  • the electronic device can quickly and accurately generate a frame difference image according to the color characteristics of the laser spot.
  • S221 includes: performing a pixel frame difference operation on the first frame image and the second frame image to generate a frame difference map, where each frame difference pixel in the frame difference map meets at least the following conditions:
  • when the difference obtained by subtracting the target color component of the pixel at the same coordinates in the second frame image from the target color component of the pixel in the first frame image is greater than or equal to a preset frame difference threshold, the target color component of the pixel at the same coordinates in the frame difference image is set to the first preset color value;
  • otherwise, the target color component of the pixel at the same coordinates in the frame difference image is set to the second preset color value.
  • the first preset color value is different from the second preset color value.
  • the first preset color value and the second preset color value can be customized by the user; for example, the first preset color value is 255 and the second preset color value is 0.
  • the color of the laser spot corresponds to the target color component.
  • the basic feature of a laser spot is that the center is bright and white, and the color of the laser spot can be various, such as red, green, blue, and so on.
  • different colors of laser points correspond to different target color components. For example, when the color of the laser point is red, the target color component is the red component and the non-target color components are the blue component and the green component; when the color of the laser point is green, the target color component is the green component and the non-target color components are the blue component and the red component; when the color of the laser point is blue, the target color component is the blue component and the non-target color components are the red component and the green component.
  • in general, the target color component of a pixel of the laser point in the first frame image is larger than the target color component of the pixel at the same coordinates in the second frame image.
  • the control unit 11 performs the pixel frame difference calculation on the first frame image and the second frame image according to the target color component, quickly and effectively filtering out some interference images while at least the image of the laser spot is retained.
  • the color of the laser spot is red
  • the target color component is a red component
  • the non-target color component is a green component and a blue component.
  • the control unit 11 determines whether the relationship between each color component of each pixel in the first frame image and the corresponding color component of each pixel in the second frame image meets the following conditions, so as to generate the frame difference image; the conditions are as follows:
  • when the difference obtained by subtracting the red component of the pixel at the same coordinates in the second frame image from the red component of each pixel in the first frame image is greater than or equal to the preset frame difference threshold, the red component of the corresponding pixel in the frame difference image is the first preset color value;
  • when the difference between the red components is less than the preset frame difference threshold, the red component of the corresponding pixel in the frame difference image is the second preset color value.
  • when the difference between the red components is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the green component of the pixel at the same coordinates in the second frame image from the green component of each pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the green component of the corresponding pixel in the frame difference image is the third preset color value.
  • when the difference between the red components is less than the preset frame difference threshold, or the difference between the green components is less than the preset frame difference threshold, the green component of the corresponding pixel in the frame difference image is the fourth preset color value, and the third preset color value is different from the fourth preset color value.
  • when the difference between the red components is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the blue component of the pixel at the same coordinates in the second frame image from the blue component of each pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the blue component of the corresponding pixel in the frame difference image is the fifth preset color value.
  • when the difference between the red components is less than the preset frame difference threshold, or the difference between the blue components is less than the preset frame difference threshold, the blue component of the corresponding pixel in the frame difference image is the sixth preset color value, and the fifth preset color value is different from the sixth preset color value.
  • the first preset color value to the sixth preset color value are user-defined.
  • R_th is the frame difference threshold; F_r(x, y), F_g(x, y), and F_b(x, y) are respectively the red, green, and blue components of the pixel at row x, column y in the first frame image; S_r(x, y), S_g(x, y), and S_b(x, y) are respectively the red, green, and blue components of the pixel at row x, column y in the second frame image; and D_r(x, y), D_g(x, y), and D_b(x, y) are respectively the red, green, and blue components of the pixel at row x, column y in the frame difference map.
  • taking the first, third, and fifth preset color values as 255 and the second, fourth, and sixth preset color values as 0 (per the example above), the conditions above can be written as Formula 1:

    D_r(x, y) = 255 if F_r(x, y) - S_r(x, y) >= R_th, else 0
    D_g(x, y) = 255 if F_r(x, y) - S_r(x, y) >= R_th and F_g(x, y) - S_g(x, y) >= R_th, else 0    (Formula 1)
    D_b(x, y) = 255 if F_r(x, y) - S_r(x, y) >= R_th and F_b(x, y) - S_b(x, y) >= R_th, else 0
  • the control unit 11 obtains each pixel value of the frame difference map according to Formula 1, and generates a frame difference map according to each pixel value of the frame difference map.
  • the frame difference threshold R_th is used to determine whether the target color component of a pixel in the first frame image is effectively larger than the target color component of the corresponding pixel in the second frame image, so that at least the interference can be effectively filtered out of the frame difference map.
  • that is, the frame difference threshold R_th is the minimum effective value of the difference obtained by subtracting the target color component of a pixel in the second frame image from the target color component of the corresponding pixel in the first frame image.
  • in this embodiment, the frame difference threshold R_th is the minimum effective value of the difference between the red components of corresponding pixels in the two frames of images.
  • the value range of the frame difference threshold R_th is [4, 10], and the frame difference threshold R_th is selected by the user according to product requirements; in this embodiment, the frame difference threshold R_th is 5.
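A sketch of Formula 1 in Python with NumPy, assuming 8-bit frames in OpenCV's BGR channel order and the example values above (255/0 as preset color values, R_th = 5); this is an editorial illustration, not code from the disclosure:

```python
import numpy as np

def frame_difference_map(first, second, r_th=5):
    """Build the frame difference map D from the two filtered frames,
    per Formula 1, for a red laser spot."""
    f = first.astype(np.int16)   # avoid uint8 wrap-around when subtracting
    s = second.astype(np.int16)
    # channel order assumed BGR: index 0 = blue, 1 = green, 2 = red
    red_ok = (f[..., 2] - s[..., 2]) >= r_th
    green_ok = (f[..., 1] - s[..., 1]) >= r_th
    blue_ok = (f[..., 0] - s[..., 0]) >= r_th
    d = np.zeros_like(first)               # unmatched pixels stay black
    d[..., 2] = np.where(red_ok, 255, 0)             # D_r
    d[..., 1] = np.where(red_ok & green_ok, 255, 0)  # D_g
    d[..., 0] = np.where(red_ok & blue_ok, 255, 0)   # D_b
    return d
```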
  • for pixels that do not meet the above conditions, the control unit 11 sets the corresponding frame difference pixel to black after the frame difference calculation, which reduces the subsequent amount of calculation and interference.
  • for example, when the color of the laser spot is red, if the difference between the green components or the blue components of corresponding pixels in the two frames of images is greater than the frame difference threshold R_th, but the difference between the red components of those pixels is not greater than the frame difference threshold R_th, the control unit 11 sets the corresponding frame difference pixel to black after the frame difference calculation is completed.
  • FIG. 4 is a frame difference image obtained by performing the pixel frame difference calculation on FIG. 3a and FIG. 3b according to an embodiment of the present disclosure. It can be seen from FIG. 4 that the frame difference image effectively filters out some interference images and at least retains the image of the laser point. It is worth noting that the original image of FIG. 4 is a color image; in order to meet the examination requirements for patent drawings, the original image has been converted into a grayscale image, as shown in FIG. 4.
  • the frame difference image may only retain the image of the laser spot.
  • the above formula is adaptively changed when the target color component is a non-red component (either a green component or a blue component).
  • after the control unit 11 obtains the frame difference image, it can identify the position of the laser spot in the first frame image by selecting image connected domains.
  • S222 includes:
  • each image (including the laser point image) retained in the frame difference image is still a color image.
  • the control unit 11 converts the frame difference image into a binary image. For example, when the target color component of a pixel in the frame difference image is the first preset color value, the electronic device sets the pixel value of the pixel to a first preset pixel value; when the target color component of the pixel in the frame difference image is the second preset color value, the electronic device sets the pixel value of the pixel to a second preset pixel value, and the first preset pixel value is different from the second preset pixel value.
  • the first preset pixel value and the second preset pixel value can be defined by a user.
  • the red component of each pixel of the frame difference map has only two values, which are 0 or 255, respectively.
  • when the control unit 11 converts the frame difference image into a binarized image, it may do so based on the following Formula 2, where B(x, y) is the luminance value of the pixel at row x, column y in the binarized image:

    B(x, y) = 255 if D_r(x, y) = 255, else B(x, y) = 0    (Formula 2)

  • FIG. 4b is a schematic diagram of the binarized image after the binarization processing of FIG. 4. As can be seen from FIG. 4b, the control unit 11 sets the red laser spot image to white.
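Continuing the sketch above, Formula 2 reduces to thresholding the red channel of the frame difference map (again assuming BGR order, with 255 as the white first preset pixel value):

```python
import numpy as np

def binarize(d):
    """Formula 2: B(x, y) = 255 where D_r(x, y) = 255, else 0."""
    return np.where(d[..., 2] == 255, 255, 0).astype(np.uint8)
```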
  • the image connected domain refers to an image area composed of foreground pixels that have the same pixel value and are located adjacent to each other in the image.
  • the control unit 11 searches for image connected domains in the binarized image according to a connected component analysis algorithm, where the connected component analysis algorithm may be Two-Pass (two-pass scanning) or Seed-Filling (seed filling), among others.
  • in some embodiments, each image connected domain is enveloped with a preset shape in the frame difference image or the binarized image. As shown in FIG. 4c, each image connected domain is enveloped by a rectangle. It can be understood that the preset shape is not limited to a rectangle; it can also be another suitable shape, such as a circle, an ellipse, or a diamond.
  • since the binarized image has a monochromatic background, the control unit 11 can process it with a small amount of computation to identify the position of the laser point in the first frame image.
  • the control unit 11 preliminarily screens the image connected domains according to the preset shapes enveloping them, retaining those that satisfy the basic shape conditions of the laser spot.
  • in some embodiments, the preset shape is a rectangle, where CC_W is the width of the rectangle, CC_H is the height of the rectangle, and CC_A is the area of the image connected domain.
  • D_min is a preset minimum value for the width or height of the rectangle, with a value range of [1, 10]; in this embodiment the value is 2.
  • D_max is a preset maximum value for the width or height of the rectangle, with a value range of [40, 70]; in this embodiment the value is 60.
  • A_min is a preset minimum area of the image connected domain, with a value range of [10, 60]; in this embodiment the value is 20.
  • when an image connected domain satisfies the basic shape conditions of the laser spot, that is, D_min <= CC_W <= D_max, D_min <= CC_H <= D_max, and CC_A >= A_min, the image connected domain is retained; the retention mode may be that the rectangle enveloping the image connected domain is still used, as shown in FIG. 4d. If the conditions are not satisfied, the image connected domain is discarded; the discarding mode may be that the rectangle enveloping the image connected domain is no longer used.
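As an illustration, OpenCV's connected component analysis can stand in for the Two-Pass or Seed-Filling algorithms named above; the sketch below screens the enveloping rectangles with the example values D_min = 2, D_max = 60, and A_min = 20:

```python
import cv2

def candidate_domains(binary, d_min=2, d_max=60, a_min=20):
    """Find image connected domains in the binarized image and keep those
    meeting the basic shape conditions of the laser spot."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(
        binary, connectivity=8)
    kept = []
    for i in range(1, n):                  # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]    # CC_W
        h = stats[i, cv2.CC_STAT_HEIGHT]   # CC_H
        area = stats[i, cv2.CC_STAT_AREA]  # CC_A
        if d_min <= w <= d_max and d_min <= h <= d_max and area >= a_min:
            x, y = stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP]
            kept.append((x, y, w, h))      # retain the enveloping rectangle
    return kept
```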
  • control unit 11 when there are multiple lasers 16 projecting multiple laser points, the control unit 11 selects one or more image connected regions that satisfy the laser point image conditions as laser points from one or more image connected regions. Therefore, It can identify multiple laser spots.
  • through this preliminary screening, the control unit 11 can filter out the image connected domains that obviously do not meet the basic shape conditions of the laser spot, so as to reduce the amount of subsequent image analysis and computation.
  • S2222 includes:
  • the electronic device calculates a weight of each image connected domain
  • the electronic device compares respective weights of any two of the image connectivity domains.
  • the electronic device traverses the image connected domain with the highest weight from one or more image connected domains, and selects the image connected domain with the highest weight as the laser spot.
  • the types of image connectivity domains include the connectivity domain being processed and the current candidate connectivity domain. Both the connectivity domain being processed and the current candidate connectivity domain include several feature values, each feature value corresponding to a weight.
  • the control unit 11 obtains the weight of each image connected domain by accumulating the weights of the individual feature values in that image connected domain. Therefore, in some embodiments, when calculating the weight of each image connected domain, referring to FIG. 5a, S51 includes:
  • the electronic device calculates a weight of each feature value in the image connectivity domain being processed or the current candidate image connectivity domain;
  • the electronic device sums the weights of the several feature values to obtain the weight of the image connected domain being processed or of the current candidate image connected domain.
  • Each image connected domain can use one or two or more feature values to describe the characteristics of the image connected domain from various dimensions.
  • the following describes the features of each image connected domain from 6 dimensions; however, those skilled in the art can understand that, for the purpose of identifying the position of the laser point in the first frame image, one feature value may be selected to express the features of the image connected domain, two feature values may be selected to express them jointly, or multiple feature values may be selected to express them comprehensively and from all dimensions. As long as the position of the laser point in the first frame image can be identified, any number of feature values, or any combination of feature values, may be selected.
  • for example, only the center brightness difference value ΔB c may be selected, or ΔB c together with the target color pixel ratio R 1, or all six feature values provided in this embodiment.
  • any replacement or change made by those skilled in the art on the basis of the teachings of this embodiment should fall within the protection scope of this disclosure.
  • the center brightness difference value ΔB c is the absolute value of the difference between the brightness of the pixel corresponding to the center of the image connected domain in the first frame image and the brightness of the corresponding pixel in the second frame image.
  • the center of an image connected domain is the pixel with the highest brightness in the image connected domain; for the laser spot image, the center brightness of the corresponding image connected domain is likewise the largest.
  • for the laser spot image, the difference between the brightness of the corresponding pixel in the first frame image and the brightness of the corresponding pixel in the second frame image is relatively large and prominent, whereas for other static objects this difference is relatively small. Therefore, comparing center brightness difference values improves the probability of screening out the image connected domain corresponding to the laser point.
  • the target color pixel ratio R 1 is the ratio of target color pixels to all pixels in the image connected domain, within the area of the first frame image corresponding to the image connected domain. For example, when the color of the laser spot is red, the target color pixel is a red pixel, where a red pixel is defined as a pixel whose HSV value lies between (140, 40, 200) and (179, 255, 255); the control unit 11 can then calculate the red pixel ratio R 1.
  • for two frame images in which, apart from the laser spot image, other objects barely change, the target color pixel ratio R 1 of an image connected domain corresponding to a non-laser-spot image generally does not change much either: for such a domain, the ratio R 1 in the first frame image is at least not larger than the corresponding ratio R 1 in the second frame image.
  • by contrast, the target color pixel ratio R 1 of the image connected domain corresponding to the laser spot image in the first frame image is greater than the ratio R 1 of the image connected domain corresponding to the laser spot image in the second frame image. Therefore, in this way, it is possible to increase the probability of screening out the image connected domain corresponding to the laser point.
  • the white pixel ratio R w is, within the area of the first frame image corresponding to the image connected domain, the ratio of white pixels to all pixels in the connected domain, where a white pixel is defined as a pixel whose grayscale value is greater than W th.
  • the value range of W th is [200,250], and the value of this embodiment is 210.
  • the width-to-height ratio R wh of the rectangular frame, where the rectangular frame envelops each image connected domain within the binarized frame difference image; the expression of the width-to-height ratio R wh of the rectangular frame is:

    R wh = CC W / CC H, when CC W ≤ CC H
    R wh = CC H / CC W, when CC W > CC H

  where CC W is the width of the rectangle and CC H is the height of the rectangle; that is, when CC W is less than or equal to CC H, the control unit 11 selects R wh as CC W/CC H, and when CC W is greater than CC H, the control unit 11 selects R wh as CC H/CC W.
  • the target color component ratio R 2 is the ratio, within the area of the frame difference map corresponding to the image connected domain, of the pixels of each color based on the target color component (excluding white) to all pixels in the image connected domain. For example, when the color of the laser spot is red, the target color component ratio R 2 is the ratio R rpy of red, pink, and yellow pixels to all pixels in the image connected domain, within the area of the frame difference map corresponding to the image connected domain.
  • the central pixel brightness B c of the connected domain is the brightness of the corresponding pixel point in the center of the image in the first frame of the image.
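  • for illustration, most of the feature values above can be sketched as follows, assuming first is the captured BGR first frame, gray_first/gray_second the grayscale versions of the two frames, mask a boolean array marking one connected domain, and stat its (x, y, CC_W, CC_H, CC_A) tuple; the HSV range test mirrors the red-pixel definition given above, while the function name and parameter layout are assumptions of the sketch:

    import cv2
    import numpy as np

    def feature_values(first, gray_first, gray_second, mask, stat, w_th=210):
        """Sketch of the per-domain feature values (R_2, taken from the
        frame difference map, is omitted here for brevity)."""
        x, y, cc_w, cc_h, cc_a = stat
        # Domain center = brightest pixel of the domain in the first frame.
        ys, xs = np.nonzero(mask)
        center = max(zip(ys, xs), key=lambda p: gray_first[p])
        b_c = int(gray_first[center])                    # center brightness B_c
        delta_b_c = abs(b_c - int(gray_second[center]))  # center brightness difference
        # Target color pixel ratio R_1, with red defined by the HSV range above.
        hsv = cv2.cvtColor(first, cv2.COLOR_BGR2HSV)
        red = cv2.inRange(hsv, (140, 40, 200), (179, 255, 255)) > 0
        r_1 = np.count_nonzero(red & mask) / cc_a
        # White pixel ratio R_w: grayscale value greater than W_th.
        r_w = np.count_nonzero((gray_first > w_th) & mask) / cc_a
        # Width-to-height ratio R_wh of the enveloping rectangle (always <= 1).
        r_wh = min(cc_w, cc_h) / max(cc_w, cc_h)
        return delta_b_c, r_1, r_w, r_wh, b_c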
  • in order to increase the probability of confirming the image connected domain corresponding to the laser point, each feature value is assigned a different weight according to its importance, for example a bit position of an integer: the more important the feature value, the higher the bit, so the feature value weighted 0x200 matters more than the one weighted 0x100.
  • finally, the weights of all the feature values are added together to form the weight of the image connected domain, and the image connected domain with the larger weight is selected as the candidate connected domain.
  • 1) for the image connected domain being processed (denoted cur below; the current candidate connected domain is denoted can), the weight expression of the center brightness difference value is:

    W(ΔB c(cur)) = 0x200, when ΔB c(cur) > 2 · ΔB c(can)
    W(ΔB c(cur)) = 0, otherwise

  where W(ΔB c(cur)) is the weight of the center brightness difference value ΔB c(cur) of the image connected domain being processed, and ΔB c(can) is the center brightness difference value of the current candidate connected domain; "0x200" is user-defined according to product requirements, and the factor "2" is an empirical value that can also be user-defined.
  • the weight expression of the target color pixel ratio R 1 is:

    W(R 1(cur)) = 0x100, when R 1(cur) > R 1(can)
    W(R 1(cur)) = 0, otherwise

  where W(R 1(cur)) is the weight of the target color pixel ratio R 1 of the image connected domain being processed, R 1(cur) is the target color pixel ratio R 1 of the image connected domain being processed, and R 1(can) is the target color pixel ratio R 1 of the current candidate image connected domain; that is, when R 1(cur) > R 1(can), the control unit 11 selects W(R 1(cur)) as "0x100", and when R 1(cur) ≤ R 1(can), the control unit 11 selects W(R 1(cur)) as "0".
  • for example, when the laser spot color is red, the target color pixel is a red pixel, defined as a pixel whose HSV value lies between (140, 40, 200) and (179, 255, 255); the control unit 11 can then calculate the red pixel ratio R 1.
  • the weight expression of the white pixel ratio R w is:

    W(R w(cur)) = 0x40, when R w(cur) > R w(can)
    W(R w(cur)) = 0, otherwise

  where W(R w(cur)) is the weight of the white pixel ratio R w of the image connected domain being processed, R w(cur) is the white pixel ratio R w of the image connected domain being processed, and R w(can) is the white pixel ratio R w of the current candidate image connected domain.
  • the weight expression of the jointly added feature values R wh, R 2 and B c is:

    W(Join(cur)) = 0x80, when R wh(cur) > R wh(can) and R 2(cur) > R 2(can) and B c(cur) > B c(can)
    W(Join(cur)) = 0, otherwise

  where W(Join(cur)) is the joint weight of the feature values R wh, R 2 and B c; R wh(cur) and R wh(can) are the width-to-height ratios of the rectangular frames of the image connected domain being processed and of the current candidate image connected domain, respectively; R 2(cur) is the target color component ratio of the image connected domain being processed; and B c(cur) and B c(can) are the center pixel brightness values of the connected domain being processed and of the current candidate connected domain, respectively.
  • in summary, when calculating the weight of each feature value in the image connected domain being processed, the control unit 11 determines and selects the corresponding weight according to the above expressions.
  • after the control unit 11 obtains the weight of each feature value, the weights of the feature values are added together to obtain the weight of the connected domain being processed; for example, the control unit 11 uses the following expression:

    W(cur) = W(ΔB c(cur)) + W(R 1(cur)) + W(R w(cur)) + W(Join(cur))
  • 2) for the current candidate image connected domain, the weights are obtained symmetrically:

    W(ΔB c(can)) = 0x200, when ΔB c(can) > 2 · ΔB c(cur), and 0 otherwise

  where W(ΔB c(can)) is the weight of the center brightness difference value of the current candidate connected domain;

    W(R 1(can)) = 0x100, when R 1(cur) < R 1(can), and 0 otherwise

  where W(R 1(can)) is the weight of the target color pixel ratio R 1 of the current candidate connected domain;

    W(R w(can)) = 0x40, when R w(cur) < R w(can), and 0 otherwise

  where W(R w(can)) is the weight of the white pixel ratio R w of the current candidate connected domain;

    W(Join(can)) = 0x80, when R wh(cur) < R wh(can) and R 2(cur) < R 2(can) and B c(cur) < B c(can), and 0 otherwise

  where W(Join(can)) is the weight of the jointly added feature values R wh, R 2 and B c; that is, when the three conditions all hold, the control unit 11 selects W(Join(can)) as "0x80", and otherwise as "0".
  • in summary, when calculating the weight of each feature value in the current candidate connected domain, the control unit 11 determines and selects the corresponding weight according to the above expressions.
  • after the control unit 11 obtains the weight of each feature value, the weights of the feature values are added together to obtain the weight of the current candidate connected domain; for example, the control unit 11 uses the following expression:

    W(can) = W(ΔB c(can)) + W(R 1(can)) + W(R w(can)) + W(Join(can))
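  • a minimal sketch of this pairwise, bit-weighted scoring, under the assumption (noted above) that the empirical factor 2 in the brightness test is multiplicative; the function name and the feature-tuple layout (which here also carries R_2, computed from the frame difference map) are illustrative:

    def pair_weights(cur, can):
        """W(cur) and W(can) for two domains; each argument is a
        (delta_b_c, r_1, r_w, r_wh, r_2, b_c) feature tuple."""
        db_c, r1_c, rw_c, rwh_c, r2_c, bc_c = cur
        db_k, r1_k, rw_k, rwh_k, r2_k, bc_k = can
        w_cur = w_can = 0
        w_cur += 0x200 if db_c > 2 * db_k else 0   # center brightness difference
        w_can += 0x200 if db_k > 2 * db_c else 0
        w_cur += 0x100 if r1_c > r1_k else 0       # target color pixel ratio R_1
        w_can += 0x100 if r1_c < r1_k else 0
        w_cur += 0x40 if rw_c > rw_k else 0        # white pixel ratio R_w
        w_can += 0x40 if rw_c < rw_k else 0
        # Joint term: R_wh, R_2 and B_c must all dominate at once.
        w_cur += 0x80 if (rwh_c > rwh_k and r2_c > r2_k and bc_c > bc_k) else 0
        w_can += 0x80 if (rwh_c < rwh_k and r2_c < r2_k and bc_c < bc_k) else 0
        return w_cur, w_can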
  • after the control unit 11 obtains the weights of the corresponding image connected domains (the connected domain being processed and the current candidate connected domain), in some embodiments the control unit 11 may adopt an elimination mechanism: it traverses all the image connected domains that meet the basic requirements, compares the connected domain being processed with the current candidate connected domain, and, according to the comparison results, traverses out the image connected domain with the highest weight from the several image connected domains. Therefore, in some embodiments, referring to FIG. 5b, S53 includes:
  • S531: the electronic device determines whether the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain;
  • S532: if it is greater, the electronic device replaces the current candidate image connected domain with the image connected domain being processed, which becomes the new current candidate image connected domain;
  • S533: if it is smaller, the electronic device discards the image connected domain being processed and retains the current candidate image connected domain;
  • S534: the electronic device determines whether there is a next image connected domain to be processed;
  • S535: if there is, the electronic device takes the next image connected domain as the image connected domain being processed, and returns to continue determining whether the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain;
  • S536: if there is not, the electronic device determines whether a current candidate image connected domain exists;
  • S537: if it exists, the electronic device takes the current candidate image connected domain as the optimal image connected domain;
  • S538: if it does not exist, the electronic device determines that no optimal image connected domain exists.
  • in this embodiment, at initialization there is no current candidate image connected domain, so its weight W(can) is set to zero. If the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain, the image connected domain being processed is set as the current candidate connected domain. If its weight is smaller, the image connected domain being processed is discarded, and the next image connected domain is taken out for comparison.
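  • putting the traversal together, a sketch of the elimination mechanism over the shape-screened candidates; best_domain and features are illustrative names, with features(d) standing for the per-domain feature extraction sketched earlier:

    def best_domain(candidates, features):
        """Traverse all qualifying domains, keeping the highest-weight one."""
        best = None            # no current candidate at initialization: W(can) = 0
        for cur in candidates:
            if best is None:
                best = cur
                continue
            w_cur, w_can = pair_weights(features(cur), features(best))
            if w_cur > w_can:
                best = cur     # processed domain becomes the new current candidate
        return best            # None means no optimal domain exists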
  • after the control unit 11 recognizes the position of the laser spot in the first frame image, the control unit 11 may also mark the laser spot; referring to FIG. 5c, the control unit 11 surrounds the laser spot with a circle of a certain color.
  • the "marking" is intended to highlight the laser spot; however, the laser spot can be highlighted in various ways, for example by rendering the laser spot image, enlarging the laser spot image, and so on.
  • in each of the above embodiments, the laser spot recognition method can be implemented as a set of instructions stored in a non-transitory storage medium of a pet device, as a set of instructions stored in a non-transitory storage medium of a user terminal, or as a set of instructions stored in a non-transitory storage medium of a cloud server or a local server. Therefore, the laser point recognition method can be applied not only to a pet device but also to a user terminal, a cloud server, or a local server.
  • when the laser point recognition method of each of the above embodiments is applied to a pet device, the pet device can transmit the marked image to a user terminal. When it is applied to a user terminal, the user terminal may mark the laser point in the image locally. When it is applied to a server, the server can transmit the marked image to a user terminal.
  • in each of the above embodiments, when the user needs to adjust the position where the laser spot is projected at home, the user clicks a certain area in the user interface of the user terminal, and the user terminal then sends a control instruction to the pet device.
  • the control unit receives the control instruction and, according to it, controls the laser point to be projected to a target position, where the target position corresponds to the clicked area.
  • for example, when the user needs the laser spot to be re-projected onto the desk at home, the user clicks the desk in the image under the user interface, and the laser of the pet device then projects the laser spot onto the desk.
  • as another aspect of the embodiments of the present disclosure, the embodiments of the present disclosure provide an apparatus for an electronic device to identify a laser spot emitted by a laser.
  • the apparatus may be implemented as a software functional unit: it includes several instructions stored in a memory, and a processor may access the memory and call the instructions for execution, so as to complete the above method for an electronic device to identify a laser spot emitted by a laser.
  • referring to FIG. 6, the apparatus 600 for identifying a laser spot emitted by a laser includes: a collection module 61 and an identification module 62.
  • the collection module 61 is configured to collect a first frame image and a second frame image in the two states of the laser emitting laser light and suspending the emission, wherein one of the first frame image and the second frame image contains the laser point emitted by the laser, and the other frame does not contain the laser point;
  • the identification module 62 is configured to identify the position of the laser spot in the first frame image or the second frame image based on the pixel difference information of the collected first frame image and the collected second frame image.
  • the identification module 62 includes a generation unit 621 and an identification unit 622.
  • the generating unit 621 is configured to generate a frame difference image based on pixel difference information of the captured first frame image and the captured second frame image.
  • the identifying unit 622 is configured to identify the position of the laser spot in the first frame image or the second frame image according to the frame difference image.
  • the color of the laser spot corresponds to the target color component
  • the first frame image includes the laser spot
  • the second frame image does not include the laser spot.
  • the generating unit 621 is specifically configured to perform a pixel frame difference operation between the first frame image and the second frame image to generate a frame difference map, in which each frame difference pixel satisfies at least the following conditions:
  • when the difference obtained by subtracting the target color components of the pixels at the same coordinates in the first frame image and the second frame image is greater than or equal to a preset frame difference threshold, the target color component of the pixel at the same coordinates in the frame difference image is set to a first preset color value; when that difference is less than the preset frame difference threshold, the target color component is set to a second preset color value, the first preset color value being different from the second preset color value.
  • for example, the color of the laser spot is red, the target color component is the red component, and the non-target color components are the green component and the blue component;
  • when the difference obtained by subtracting the red component of each pixel in the second frame image from the red component of the corresponding pixel in the first frame image is greater than or equal to the preset frame difference threshold, the red component of the pixel in the frame difference image is the first preset color value;
  • when that red-component difference is less than the preset frame difference threshold, the red component of the pixel in the frame difference image is the second preset color value;
  • when the red-component difference is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the green component of each pixel in the second frame image from the green component of the corresponding pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the green component of the pixel in the frame difference image is a third preset color value;
  • when the red-component difference is less than the preset frame difference threshold, or the green-component difference is less than the preset frame difference threshold, the green component of the pixel in the frame difference image is a fourth preset color value; the third preset color value is different from the fourth preset color value;
  • when the red-component difference is greater than or equal to the preset frame difference threshold, and the difference obtained by subtracting the blue component of each pixel in the second frame image from the blue component of the corresponding pixel in the first frame image is also greater than or equal to the preset frame difference threshold, the blue component of the pixel in the frame difference image is a fifth preset color value;
  • when the red-component difference is less than the preset frame difference threshold, or the blue-component difference is less than the preset frame difference threshold, the blue component of the pixel in the frame difference image is a sixth preset color value; the fifth preset color value is different from the sixth preset color value.
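  • a minimal sketch of this frame-difference generation for a red laser spot, assuming BGR uint8 frames and this embodiment's preset values (first/third/fifth preset color values 255, second/fourth/sixth 0, frame difference threshold R th = 5); the function name is illustrative:

    import numpy as np

    def frame_difference_map(first, second, r_th=5):
        """Per-channel frame difference map with red as the target component."""
        f = first.astype(np.int16)    # widen to avoid uint8 wrap-around
        s = second.astype(np.int16)
        red_ok = (f[..., 2] - s[..., 2]) >= r_th      # BGR channel order assumed
        green_ok = red_ok & ((f[..., 1] - s[..., 1]) >= r_th)
        blue_ok = red_ok & ((f[..., 0] - s[..., 0]) >= r_th)
        diff = np.zeros_like(first)
        diff[..., 2] = np.where(red_ok, 255, 0)
        diff[..., 1] = np.where(green_ok, 255, 0)
        diff[..., 0] = np.where(blue_ok, 255, 0)
        return diff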
  • the identification unit 622 is specifically configured to: convert the frame difference image into a binarized image; search for one or more image connected domains in the binarized image; select, from the one or more image connected domains, one or more image connected domains satisfying the laser point image conditions as laser points; and obtain the coordinate positions of the selected one or more image connected domains in the first frame image or the second frame image.
  • the identifying unit 622 is further specifically configured to: when the target color component of a pixel in the frame difference image is the first preset color value, set the pixel value of that pixel to the first preset pixel value; and when the target color component of a pixel in the frame difference image is the second preset color value, set the pixel value of that pixel to the second preset pixel value, the first preset pixel value being different from the second preset pixel value.
  • the recognition unit 622 is further specifically configured to: calculate the weight of each image connected domain; compare the respective weights of any two image connected domains; and, according to the comparison results, traverse out the image connected domain with the highest weight from the one or more image connected domains and select it as the laser spot.
  • the type of the image connectivity domain includes the image connectivity domain being processed and the current candidate image connectivity domain, and both the image connectivity domain being processed and the current candidate image connectivity domain include several feature values, each feature value corresponding to a weight
  • the recognition unit 622 is further specifically configured to: calculate the weight of each feature value in the image connected domain being processed or in the current candidate image connected domain; and sum the weights of the several feature values to obtain the weight of the image connected domain being processed or of the current candidate image connected domain.
  • the identification unit 622 is further specifically configured to: determine whether the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain; if it is greater, replace the current candidate image connected domain with the image connected domain being processed, which becomes the new current candidate image connected domain; if it is smaller, discard the image connected domain being processed and retain the current candidate image connected domain; determine whether there is a next image connected domain to be processed; if there is, take the next image connected domain as the image connected domain being processed and return to continue determining whether the weight of the image connected domain being processed is greater than the weight of the current candidate image connected domain; if there is not, determine whether a current candidate image connected domain exists; if it exists, take the current candidate image connected domain as the optimal image connected domain; if it does not exist, determine that no optimal image connected domain exists.
  • the types of the feature values include any one or more of the following: the center brightness difference value ΔB c, the target color pixel ratio R 1, the white pixel ratio R w, the width-to-height ratio R wh of the rectangular frame, the target color component ratio R 2, and the center pixel brightness B c of the connected domain.
  • the center brightness difference value ΔB c is the absolute value of the difference between the brightness of the pixel corresponding to the center of the image connected domain in the first frame image and the brightness of the corresponding pixel in the second frame image;
  • the target color pixel ratio R 1 is the ratio of target color pixels to all pixels in the image connected domain, within the area of the first frame image corresponding to the image connected domain; the white pixel ratio R w is the ratio of white pixels to all pixels in the connected domain, within that same area;
  • the rectangular frame envelops each image connected domain within the binarized frame difference image
  • the target color component ratio R 2 is the ratio of each color pixel in the area corresponding to the image connected domain except for white in the frame difference map and the target color component to all pixels in the image connected domain;
  • the pixel brightness B c of the center of the connected domain is the brightness of the corresponding pixel point in the center of the image in the first frame of the image.
  • the target color component ratio R 2 is the ratio of red, pink, and yellow pixels in the area of the frame difference map corresponding to the image connected domain to all pixels in the image connected domain.
  • each image connected domain is enveloped with a preset shape within a frame difference image or a binarized image.
  • the recognition unit 622 is further specifically configured to: preliminarily screen out, according to the preset shape enveloping the image connected domains, the image connected domains that satisfy the basic shape conditions of the laser spot.
  • the preset shape includes a rectangle; the identification unit 622 is further specifically configured to: determine whether the rectangle enveloping an image connected domain simultaneously satisfies

    D min ≤ CC W ≤ D max
    D min ≤ CC H ≤ D max
    A min ≤ CC A

  and, if so, retain the image connected domain, otherwise discard it; where CC W is the width of the rectangle, CC H is the height of the rectangle, CC A is the area of the image connected domain, D min is the preset minimum value corresponding to the width or height of the rectangle, D max is the preset maximum value corresponding to the width or height of the rectangle, and A min is the preset minimum area corresponding to the image connected domain.
  • the identification module 62 further includes a filtering unit 623.
  • the filtering unit 623 is configured to process the first frame image and the second frame image by using a low-pass average filtering method, respectively.
  • the laser spot recognition device 600 further includes a marking module 63.
  • the marking module 63 is used for marking a laser spot.
  • the laser spot recognition device 600 further includes: an obtaining module 64 and a control module 65.
  • the obtaining module 64 is configured to obtain a control instruction
  • the control module 65 is configured to control the laser point to be projected to the target position according to the control instruction.
  • it should be noted that the above laser spot recognition apparatus can execute the method for an electronic device to identify a laser spot emitted by a laser provided by the embodiments of the present disclosure, and has the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this apparatus embodiment, reference may be made to the method for an electronic device to identify a laser spot emitted by a laser provided by the embodiments of the present disclosure.
  • the embodiments of the present disclosure provide an electronic device.
  • the electronic device 700 includes: one or more processors 71 and a memory 72.
  • one processor 71 is taken as an example in FIG. 7.
  • the processor 71 and the memory 72 may be connected through a bus or in other manners. In FIG. 7, the connection through the bus is taken as an example.
  • the memory 72, as a non-volatile computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the method for an electronic device to identify a laser spot emitted by a laser in the embodiments of the present disclosure.
  • the processor 71 runs the non-volatile software programs, instructions, and modules stored in the memory 72, thereby executing the method for an electronic device to identify a laser spot emitted by a laser of each embodiment described above, or the various functional applications and data processing of the apparatus for identifying a laser spot emitted by a laser of each embodiment described above.
  • the memory 72 may include a high-speed random access memory, and may further include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
  • the memory 72 may optionally include a memory remotely set relative to the processor 71, and these remote memories may be connected to the processor 71 through a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the program instructions/modules are stored in the memory 72 and, when executed by the one or more processors 71, perform the method for identifying a laser spot emitted by a laser of any of the foregoing method embodiments, or the various functional applications and data processing of the apparatus for identifying a laser spot emitted by a laser of the foregoing embodiments.
  • An embodiment of the present disclosure also provides a non-transitory computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being used to cause an electronic device to execute the method for an electronic device to identify a laser spot emitted by a laser according to any one of the above items.
  • An embodiment of the present disclosure provides a computer program product.
  • the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, the computer program including program instructions which, when executed by an electronic device, cause the electronic device to perform the method for an electronic device to identify a laser spot emitted by a laser according to any one of the above items.
  • the apparatus or device embodiments described above are merely schematic; the units described as separate components may or may not be physically separated, and the components displayed as module units may or may not be physical units, i.e., they may be located in one place or distributed across multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the objective of the solution of the embodiment.

Landscapes

  • Life Sciences & Earth Sciences (AREA)
  • Environmental Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Birds (AREA)
  • Animal Husbandry (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

一种电子设备识别激光器发射的激光点的方法及电子设备,电子设备包括:摄像模组(15);激光器(16);至少一个处理器(71);与至少一个处理器(71)通信连接的存储器(72);存储器(72)存储有可被至少一个处理器(71)执行的指令,指令被至少一个处理器(71)执行,使至少一个处理器(71)能够用于执行:在激光器(16)发射激光和暂停发射激光两种状态下,采集第一帧图像和第二帧图像,第一帧图像和第二帧图像中一帧图像包含激光器(16)发射的激光点,另一帧不包含激光点;基于第一帧图像与第二帧图像的像素差异信息,识别出激光点在第一帧图像或第二帧图像中的位置。因此,其能够适应复杂成像环境,快速地、准确地从第一帧图像或第二帧图像中识别出激光点的位置。

Description

电子设备识别激光器发射的激光点方法及电子设备 技术领域
本公开涉及图像分析技术领域,特别是涉及一种电子设备识别激光器发射的激光点方法及电子设备。
背景技术
喂食与互动娱乐是宠物机器人两个关键核心功能,其中,互动娱乐的方式包括双向通话互动与激光互动。当宠物主人不在家时,宠物主人可以随时随地通过智能手机APP观看家中宠物的活动状态,并通过激光互动方式与宠物展开互动娱乐,例如,当用户点击手机屏幕上的A位置时,相对应地,激光点会投射在家里与A位置对应的位置。
为了控制激光点在家里投射的位置,用户需要在手机屏幕的图像上寻找激光点,传统技术是用户手动寻找激光点。然而,受限于家里复杂的成像环境或激光点的发光功率,激光点在图像中的被识别率不高,用户未能够有效地控制激光点投射的位置。
发明内容
本公开实施例一个目的旨在提供一种电子设备识别激光器发射的激光点方法及电子设备,其识别激光点的准确率比较高。
为解决上述技术问题,本公开实施例提供以下技术方案:
在第一方面,本公开实施例提供一种电子设备,包括:
摄像模组;
激光器;
至少一个处理器;以及
与所述至少一个处理器通信连接的存储器;其中,所述存储器存储有可被所述至少一个处理器执行的指令,所述指令被所述至少一个处理器执行,以使所述至少一个处理器能够用于执行:
在激光器发射激光和暂停发射激光的两种状态下,采集第一帧图像和第二帧图像,其中,所述第一帧图像和第二帧图像中一帧图像包含所述激光器发射的激光点,另一帧不包含所述激光点;
基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置。
可选地,所述摄像模组包括摄像头与驱动组件,所述驱动组件与所述摄像头连接,所述驱动组件用于驱动摄像头移动。
可选地,所述驱动组件包括线性电机、旋转电机、滑轨及支架,所述线性电机的输出端与所述旋转电机的固定端连接,所述支架固定安装于所述旋转电机的输出端,所述摄像头与所述激光器皆安装于所述支架上,所述摄像头收容于所述滑轨内并可沿着所述滑轨自由移动。
可选地,所述摄像模组包括补光组件,所述补光组件用于在所述摄像头拍摄图像时,为所述摄像头补光。
可选地,所述摄像头包括一个或多个光学传感器与镜头,所述一个或多个光学传感器设置于镜头的成像面。
可选地,所述基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置,包括:
基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,生成帧差图像;
根据所述帧差图像,识别出所述激光点在所述第一帧图像或第二帧图像中的位置。
可选地,所述根据所述帧差图像,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置,包括:
将所述帧差图像转换成二值化图像;
从所述二值化图像中搜索一个或多个图像连通域;
从所述一个或多个图像连通域选择满足激光点图像条件的一个或多个图像连通域作为激光点;
获取选择的一个或多个图像连通域在所述第一帧图像或第二帧图像中的坐标位置。
在第二方面,本公开实施例提供一种电子设备识别激光器发射的激光点方法,包括:
在激光器发射激光和暂停发射激光的两种状态下,所述电子设备采集第一帧图像和第二帧图像,其中,所述第一帧图像和第二帧图像中一帧图像包含所述激光器发射的激光点,另一帧不包含所述激光点;
所述电子设备基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置。
可选地,所述电子设备基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置,包括:
所述电子设备基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,生成帧差图像;
所述电子设备根据所述帧差图像,识别出所述激光点在所述第一帧图像或第二帧图像中的位置。
可选地,所述激光点的颜色对应目标颜色分量,所述第一帧图像包含所述激光点,所述第二帧图像不包含所述激光点;
所述电子设备基于所述采集的第一帧图像与所述采集的第二帧图像的像素差异信息,生成帧差图像,包括:
将所述第一帧图像与所述第二帧图像作像素帧差运算,生成帧差图,所述帧差图中每个帧差像素至少满足以下条件:
当所述第一帧图像与所述第二帧图像同一坐标的像素中目标颜色分量相减后的差值大于或等于预设帧差阈值时,所述帧差图像中与所述同一坐标对应的像素中的目标颜色分量被置为第一预设颜色值;
当所述第一帧图像与所述第二帧图像同一坐标的像素中目标颜色分量相减后的差值小于预设帧差阈值时,所述帧差图像中与所述同一坐标对应的像素中的目标颜色分量被置为第二预设颜色值,所述第一预设颜色值与所述第二预设颜色值不同。
可选地,所述激光点的颜色为红色,目标颜色分量为红色分量,非目标颜色分量为绿色分量与蓝色分量;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧 图像中每个像素的红色分量之差大于或等于预设帧差阈值时,所述帧差图像中像素的红色分量为第一预设颜色值;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差小于预设帧差阈值时,所述帧差图像中像素的红色分量为第二预设颜色值;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差大于或等于预设帧差阈值,且所述第一帧图像中每个像素的绿色分量减去所述第二帧图像中每个像素的绿色分量之差大于或等于预设帧差阈值时,所述帧差图像中像素的绿色分量为第三预设颜色值;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差小于预设帧差阈值,或者,所述第一帧图像中每个像素的绿色分量减去所述第二帧图像中每个像素的绿色分量之差小于预设帧差阈值时,所述帧差图像中像素的绿色分量为第四预设颜色值;所述第三预设颜色值与所述第四预设颜色值不同;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差大于或等于预设帧差阈值,且所述第一帧图像中每个像素的蓝色分量减去所述第二帧图像中每个像素的蓝色分量之差大于或等于预设帧差阈值时,所述帧差图像中像素的蓝色分量为第五预设颜色值;
当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差小于预设帧差阈值,或者,所述第一帧图像中每个像素的蓝色分量减去所述第二帧图像中每个像素的蓝色分量之差小于预设帧差阈值时,所述帧差图像中像素的蓝色分量为第六预设颜色值,所述第五预设颜色值与所述第六预设颜色值不同。
可选地,所述电子设备根据所述帧差图像,识别出所述激光点在所述第一帧图像或所述第二帧图像中的位置,包括:
所述电子设备将所述帧差图像转换成二值化图像;
所述电子设备从所述二值化图像中搜索一个或多个图像连通域;
所述电子设备从所述一个或多个图像连通域选择满足激光点图像条件的一个或多个图像连通域作为激光点;
所述电子设备获取选择的一个或多个图像连通域在所述第一帧图像或第二 帧图像中的坐标位置。
可选地,所述电子设备将所述帧差图像转换成二值化图像,包括:
当所述帧差图像中像素的目标颜色分量为所述第一预设颜色值时,所述电子设备将所述像素的像素值置为第一预设像素值;
当所述帧差图像中像素的目标颜色分量为所述第二预设颜色值时,所述电子设备将所述像素的像素值置为第二预设像素值,所述第一预设像素值与所述第二预设像素值不同。
可选地,所述电子设备从所述一个或多个图像连通域选择满足激光点图像条件的一个或多个图像连通域作为激光点,包括:
所述电子设备计算出每个所述图像连通域的权重;
所述电子设备比较任意两个所述图像连通域的各自权重;
所述电子设备根据比较结果,从所述一个或多个图像连通域中遍历出权重最高的图像连通域,并选择所述权重最高的图像连通域作为激光点。
可选地,所述图像连通域的类型包括正在处理的图像连通域与当前候选图像连通域,所述正在处理的图像连通域与所述当前候选图像连通域均包括若干特征值,每个所述特征值均对应权重;
则所述电子设备计算出每个所述图像连通域的权图像重,包括:
所述电子设备计算出所述正在处理的图像连通域或所述当前候选图像连通域中每个所述特征值的权重;
所述电子设备联合相加若干所述特征值的权重,得到所述正在处理的图像连通域或所述当前候选图像连通域的权重。
可选地,所述电子设备根据比较结果,从所述一个或多个图像连通域中遍历出权重最高的图像连通域,包括:
所述电子设备判断正在处理的图像连通域的权重是否大于当前候选图像连通域的权重;
若大于,所述电子设备将所述正在处理的图像连通域替换成当前候选图像连通域,并作为新的当前候选图像连通域;
若小于,所述电子设备丢弃所述正在处理的图像连通域,保留所述当前候选图像连通域;
所述电子设备判断是否存在下一个正在处理的图像连通域;
若存在,所述电子设备将下一个正在处理的图像连通域作为所述正在处理的图像连通域,并返回继续判断正在处理的图像连通域的权重是否大于当前候选图像连通域的权重;
若未存在,所述电子设备判断是否存在当前候选图像连通域;
若存在,所述电子设备将所述当前候选图像连通域作为最优的图像连通域;
若未存在,所述电子设备判断出未存在最优的图像连通域。
可选地,所述特征值的类型包括以下任意一种或多种:
中心亮度差异值 ΔB c，所述中心亮度差异值 ΔB c 为图像连通域中心在所述第一帧图像中对应像素点的亮度与在所述第二帧图像中对应像素点的亮度之差的绝对值；
目标颜色像素占比率R 1,所述目标颜色像素占比率R 1为所述第一帧图像中与图像连通域对应的区域内,目标颜色像素占所述图像连通域中所有像素的比率;
白色像素占比率R w,所述第一帧图像中与图像连通域对应的区域内,白色像素占所述连通域中所有像素的比率;
矩形框的宽高比率R wh,所述矩形框在所述二值化帧差图像内包络每个所述图像连通域;
目标颜色分量占比率R 2,目标颜色分量占比率R 2为帧差图与图像连通域对应的区域内除去白色之外,并以目标颜色分量为基准的各个颜色像素占图像连通域中所有像素的比率;
连通域中心像素亮度B c,连通域中心像素亮度B c为图像连通域中心在所述第一帧图像中对应像素点的亮度。
可选地,在所述激光点的颜色为红色的情况下,目标颜色分量占比率R 2为帧差图与图像连通域对应的区域内红色、粉色、黄色像素占图像连通域中所有像素的比率。
可选地,所述方法还包括:
所述电子设备标记所述激光点。
在第三方面,本公开实施例提供一种计算机程序产品,所述计算机程序产品包括存储在非易失性计算机可读存储介质上的计算机程序,所述计算机程序包括程序指令,当所述程序指令被电子设备执行时,使所述电子设备执行任一 项所述的电子设备识别激光器发射的激光点方法。
在第四方面,本公开实施例还提供了一种非易失性计算机可读存储介质,所述非易失性计算机可读存储介质存储有计算机可执行指令,所述计算机可执行指令用于使电子设备执行任一项所述的电子设备识别激光器发射的激光点方法。
在本公开各个实施例提供的电子设备识别激光器发射的激光点方法及电子设备中,首先,在激光器发射激光和暂停发射激光的两种状态下,采集第一帧图像和第二帧图像,其中,第一帧图像和第二帧图像中一帧图像包含激光器发射的激光点,另一帧不包含所述激光点,于是,通过根据第一帧图像与第二帧图像的像素差异信息,其能够适应复杂成像环境,快速地、准确地从第一帧图像或第二帧图像中识别出激光点的位置。
附图说明
一个或多个实施例通过与之对应的附图中的图片进行示例性说明,这些示例性说明并不构成对实施例的限定,附图中具有相同参考数字标号的元件表示为类似的元件,除非有特别申明,附图中的图不构成比例限制。
图1是本公开实施例提供一种逗宠设备的示意图;
图1a是图1中摄像模组的功能框图的示意图;
图1b是图1a中驱动组件的功能框图的示意图;
图1c是本公开实施例提供一种逗宠设备安装在住宅内的示意图;
图1d是本公开实施例提供一种用户终端的逗宠应用程序的用户界面示意图;
图1e是本公开实施例提供一种标记激光点的用户界面示意图;
图2是本公开实施例提供一种激光点识别方法的流程示意图;
图2a是本公开实施例提供一种逗宠设备与宠物互动的示意图;
图3是图2中S22的流程示意图;
图3a为本公开实施例提供一种在复杂成像环境下包含激光点的一帧图像;
图3b为本公开实施例提供一种在复杂成像环境下未包含激光点的另一帧图像;
图4是本公开实施例提供一种将图3a与图3b作像素帧差运算后的帧差图像;
图4a是图3中S222的流程示意图;
图4b是将图4作二值化处理后的二值化帧差图像的示意图;
图4c是图4中各个图像连通域被矩形包络的示意图;
图4d是图4中满足激光点的基础形状条件的图像连通域的示意图;
图5是图4a中S2222的流程示意图;
图5a是图5中S51的流程示意图;
图5b是图5中S53的流程示意图;
图5c是标记图3a中激光点的示意图;
图6是本公开实施例提供一种激光点识别装置的结构示意图;
图6a是图6中识别模块的一种结构示意图;
图6b是图6中识别模块的另一种结构示意图;
图6c是本公开另一实施例提供一种激光点识别装置的结构示意图;
图6d是本公开又另一实施例提供一种激光点识别装置的结构示意图;
图7是本公开实施例提供一种电子设备的结构示意图。
具体实施方式
为了使本公开的目的、技术方案及优点更加清楚明白,以下结合附图及实施例,对本公开进行进一步详细说明。应当理解,此处所描述的具体实施例仅用以解释本公开,并不用于限定本公开。
本公开实施例的激光点识别方法可以在任何合适类型并具有运算能力的电子设备中执行,例如,在一些实施例中,电子设备可以为用户终端,用户终端包括智能手机、计算机、掌上电脑(Personal Digital Assistant,PDA)、平板电脑、智能手表或台式计算机等等。在另一些实施例中,电子设备亦可以为逗宠设备(例如宠物机器人、或者其他任一具备激光源的机器设备),逗宠设备配置在家里或者其它合适位置,逗宠设备接收用户的控制命令,调整激光点在家里的投射位置,从而实现逗宠。
本公开实施例的电子设备可以被构造成任何合适形状,当电子设备为逗宠设备时,逗宠设备可以固定安装在家里特定位置,亦可以如移动机器人,在家 里按照预设逻辑运动。
请参阅图1,逗宠设备100包括控制单元11、无线通信单元12、喂食单元13、音频单元14、摄像模组15及激光器16。
控制单元11作为逗宠设备100的控制核心,协调各个单元的工作。控制单元11可以为通用处理器(例如中央处理器CPU)、数字信号处理器(DSP)、专用集成电路(ASIC)、现场可编程门阵列(FPGA、CPLD等)、单片机、ARM(Acorn RISC Machine)或其它可编程逻辑器件、分立门或晶体管逻辑、分立的硬件组件或者这些部件的任何组合。还有,控制单元11还可以是任何传统处理器、控制器、微控制器或状态机。控制单元11也可以被实现为计算设备的组合,例如,DSP和微处理器的组合、多个微处理器、一个或多个微处理器结合DSP核、或任何其它这种配置。
无线通信单元12用于与用户终端无线通信,无线通信单元12与控制单元11电连接。逗宠时,用户通过用户终端向逗宠设备100发送控制指令,无线通信单元12接收控制指令并向控制单元11发送该控制指令,控制单元11根据该控制指令控制逗宠设备100。
无线通信单元12包括广播接收模块、移动通信模块、无线互联网模块、短距离通信模块和定位信息模块的其中一种或多种的组合。其中,广播接收模块经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播接收模块可以使用数字广播系统来接收数字广播信号,数字广播系统诸如为地面数字多媒体广播(DMB-T)、卫星数字多媒体广播(DMB-S)、仅媒体前向链路(MediaFLO)、手持数字视频广播(DVB-H)或地面综合业务数字广播(ISDB-T)。
移动通信模块向移动通信网络上的基站、外部终端和服务器中的至少一方发送无线信号,或者可以从基站、外部终端和服务器中的至少一方接收无线信号。这里,根据字符/多媒体消息的接收和发送,无线信号可以包括语音呼叫信号、视频呼叫信号或各种形式的数据。
无线互联网模块指的是用于无线互联网连接的模块,并且可以内置或外置于终端。可以使用诸如无线LAN(WLAN)(Wi-Fi)、无线宽带(Wibro)、全球微波接入互操作性(Wimax)、高速下行分组接入(HSDPA)这样的无线互联网技术。
短距离通信模块指的是用于进行短距离通信的模块。可以使用诸如蓝牙(Bluetooth)、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)或ZigBee 这样的短距离通信技术。
定位信息模块是用于获得移动终端的位置的模块,例如全球定位系统(GPS)模块。
喂食单元13用于向宠物投放食物,喂食单元13与控制单元11电连接,控制单元11向喂食单元13发送喂食指令,喂食单元13根据喂食指令向宠物喂食。因此,当用户未能够实时在家照顾宠物时,用户可以远程控制喂食单元13投放食物,或者,喂食单元13可以按照预设时间定时向宠物投放食物。宠物不同,喂食单元13投放的食物亦不同,例如,宠物为狗时,喂食单元13投放狗粮;宠物为猫时,喂食单元13投放猫粮;宠物为鱼时,喂食单元13投放鱼粮。
用户可以根据产品需求,自行设计喂食单元13。例如,喂食单元12包括储物装置与驱动机构,储物装置用于承载食物,驱动机构用于控制储物装置投放食物。其中,储物装置可以为诸如转盘、储物盒等等装置,驱动机构包括电机、转动轴、齿轮传动机构及支架,电机与控制单元11电连接,电机的输出轴与转动轴一端连接,转动轴另一端与齿轮传动机构连接,并且,支架一端连接至齿轮传动机构,支架另一端与储物装置连接。电机根据控制单元11发送控制指令,驱动齿轮传动机构的工作状态。例如,当需要喂食时,电机驱动齿轮转动机构转动,齿轮传动机构通过支架带动储物装置移动,使得储物装置携带食物移动出逗宠设备的外部环境。储物装置移动至预设距离后,电机停止驱动齿轮传动机构转动。
音频单元14用于向采集逗宠设备100周围环境的声音,或者,推送声音,音频单元14与控制单元11电连接。
当用户通过语音方式与宠物交流时,用户通过用户终端发送语音信息,无线通信单元12接收该语音信息并向控制单元11发送该语音信息,控制单元11通过音频单元14将该语音信息推送出,例如,用户发出“Jerry,今天的午餐美味吗”,于是,音频单元14向逗宠设备100周围推送出“Jerry,今天的午餐美味吗”,宠物听到主人的语音,叫出一系列兴奋的声音。
或者,音频单元14采集宠物的声音,并将宠物的声音反馈至用户终端,用户听取宠物的声音,根据宠物的声音判断宠物的当前情绪,以便用户全方位及时了解宠物的当前生活状态。
在一些实施例中,音频单元14可以为喇叭、扬声器、麦克风等等电声换能 器,其中,喇叭或扬声器的数量可以为一个或多个,麦克风的数量可以为多个,多个麦克风可以构成麦克风阵列,以便有效地采集声音。麦克风可以是电动式的(动圈式、带式)、电容式的(直流极化式)、压电式的(晶体式、陶瓷式)、电磁式的、碳粒式的、半导体式的等或其任意组合。在一些实施例中,麦克风可以是微型机电系统(MEMS)麦克风。
摄像模组15用于拍摄宠物所处的环境,摄像模组15与控制单元11电连接,摄像模组15获取宠物所处环境的图像,并向控制单元11输出该图像,以便控制单元11根据该图像作出下一步逻辑运算。
请参阅图1a,摄像模组15包括摄像头151与驱动组件152,驱动组件152与摄像头151连接,驱动组件152用于驱动摄像头151移动,使得摄像头151能够多角度拍摄图像。
摄像头包括一个或多个光学传感器与镜头,一个或多个光学传感器设置于镜头的成像面,拍摄物体时,通过镜头将生成的光学图像投射至光学传感器上。
光学传感器包括电荷耦合设备(Charge-coupled Device,CCD)、互补金属氧化物半导体(Complementary Metal Oxide Semiconductor,CMOS),CMOS传感器可以为背照式CMOS传感器或堆栈式CMOS传感器。
在一些实施例中,摄像头还集成有ISP(Image Signal Processor,图像信号处理器),ISP用于处理光学传感器的输出数据,如做AEC(自动曝光控制)、AGC(自动增益控制)、AWB(自动白平衡)、色彩校正等等功能的处理。
在一些实施例中,请再参阅图1a,摄像模组15还包括补光组件153,补光组件153用于在摄像头151拍摄图像时,为摄像头151补光。例如,房间环境的光线不足时,控制单元11启动补光组件153发光,因此,摄像模组15能够更加清晰地拍摄物体。
补光组件153可以为LED灯等发光源。
激光器16用于投射激光点,激光器16与控制单元11电连接,控制单元11控制激光器16投射激光点的方向,进而调整了激光点的投射位置。
激光器16包括任意类型并且能够投射激光点的激光源,激光源包括固体激光器、气体激光器、液体激光器、半导体激光器、自由电子激光器等等。
激光器16发射激光线时,激光线受阻挡物的阻挡而在阻挡物的表面呈现一个激光区域,该激光区域可以为一定面积的发光区域,习惯称呼该激光区域为 激光点,亦即,激光点可以为一定面积的发光区域。
一般的,激光点的基本特征为中心高亮而呈白色,颜色为红色,形状为圆形或类似于圆形的类圆形(如规则或不规则椭圆形)等等形状。可以理解的是,激光点的四周颜色不仅可以为红色,而且还可以为绿色等等颜色。
激光器16与摄像模组15的相对位置是固定的,但是,摄像模组15相对于逗宠设备100的其它非摄像模组是可以自由移动的,例如,相对于逗宠设备100的外壳,摄像模组15可以上下旋转,于是,摄像模组15可以全方位地拍摄家里各个角度的图像,不过,激光器16与摄像模组15的相对位置保持固定不变。例如,请参阅图1b,驱动组件152包括线性电机1521、旋转电机1522、滑轨1523及支架1524,线性电机1521的输出端与旋转电机1522的固定端连接,支架1524固定安装于旋转电机1522的输出端,摄像头151与激光器16皆安装于支架1524上,摄像头151收容于滑轨1523内并可沿着滑轨1523自由上下移动。线性电机1521与旋转电机1522受控制单元11的控制。当线性电机1521正转时,线性电机1521的输出端带动旋转电机1522向上移动,旋转电机1522通过支架1524带动摄像头151与激光器16沿着滑轨1523向上移动。当线性电机1521反转时,线性电机1521的输出端带动旋转电机1522向下移动,旋转电机1522通过支架1524带动摄像头151与激光器16沿着滑轨1523向下移动。当旋转电机1522正转时,旋转电机1522的输出端通过支架1524带动摄像头151与激光器16顺时针旋转。当旋转电机1522反转时,旋转电机1522的输出端通过支架1524带动摄像头151与激光器16逆时针旋转。
宠物对激光比较感兴趣,当激光点投射在某一位置时,宠物追随着激光点而移动至该某一位置。于是,当激光点投射在不同位置时,宠物为了追逐激光点,会不断切换自身位置,从而实现用户与宠物之间的互动。
请一并参阅图1c与图1d,逗宠设备100安装在家里阳台110上,其中,家里养着宠物120,用户150操作用户终端130,通过互联网远程控制逗宠设备100。
用户终端130支持各种桌面应用程序的安装,诸如以下桌面应用程序中的一个或者多个桌面应用程序:逗宠应用程序、绘图应用程序、演示应用程序、文字处理应用程序、电子表格应用程序、游戏应用程序、电话应用程序、视频会议应用程序、电子邮件应用程序、即时消息应用程序、训练支持应用程序、照片应用程序、数码相机应用程序、数码录像机应用程序、网页浏览应用程序、 数字音乐播放器应用程序以及数字视频播放器应用程序等等。
逗宠时,用户操作用户终端130,打开逗宠应用程序,如图1d所示,逗宠应用程序的用户界面300用于呈现逗宠设备100拍摄宠物所处的当前环境,用户界面300包括多个虚拟按键,诸如返回按键31、设置按键32、拍照按键33、视频按键34、喂食按键35、音频按键36或激光按键(图未示)。用户通过单击特定虚拟按键,便可以远程控制逗宠设备100工作特定状态,例如,用户单击喂食按键35,用户终端130向逗宠设备100发送喂食指令,喂食单元13根据该喂食指令开始喂食。
再例如,用户单击激光按键,用户终端130向逗宠设备100发送激光模式指令,激光器16根据激光模式指令开始投射激光,并且,无线通信单元12将投射有激光点的位置对应的图像发送至用户终端,通过用户终端的用户界面呈现出该图像。于是,在激光模式下,用户肉眼从该图像中识别出激光点所处的位置,需要调整激光点的投射位置时,用户在用户界面上其它地方单击,于是,逗宠设备100控制激光器16将激光点投射在家里与单击之处对应的位置。与此同时,摄像模组15跟随着激光器16的移动而移动。
然而,家里成像环境比较复杂,例如,家中存在大面积的黑色背景(例如,电视机未显示画面的屏幕),或者,存在透明物体(例如,窗户玻璃),一方面,用户无法从图像中识别出激光点,另一方面,用户无法确定激光点是否投射在黑色背景或透明物体上,极大降低用户的体验感。
再者,为了满足行业激光器标准,激光点的功率皆低于1mW,因此,激光点的亮度不高,如此亮度的激光点在成像时也并不突出,尤其在环境背景很亮时成像的,由于激光点与环境亮度差异比较小,用户比较难以从环境中识别出激光点。
再者,家里环境存在类似激光点的发光物体时,例如,家人的钻石吊坠或耳环,发光物体同时呈现在图像上,用户会误判该发光物体为激光点。
最后,用户肉眼识别激光点,并根据激光点在当前屏幕上的位置进行点击,以调整下一位置,此方式误差比较大,准确度不高。
总体而言,传统方式存在诸多不足。基于此,本公开实施例提供的方案能够自动识别出激光点在图像中的位置,并标记激光点。请参阅图1e,其使用圆圈圈住激光点140,圆圈可以为任意颜色,诸如绿色、橙色等等,以明显的标记 突出激光点。
请参阅图2,图2是本公开实施例提供一种激光点识别方法的流程示意图。如图2所示,激光点识别方法200包括:
S21、在激光器发射激光和暂停发射激光的两种状态下,电子设备采集第一帧图像和第二帧图像,其中,第一帧图像和第二帧图像中一帧图像包含激光器发射的激光点,另一帧不包含激光点;
第一帧图像与第二帧图像可以由摄像模组15连续拍摄周围环境而获得,其中,“连续”一词表示两帧图像之间并未插入额外图像帧。
在本实施例中,可以理解的是:第一帧图像包含激光器发射的激光点,第二帧图像不包含激光点;亦可以理解的是:第一帧图像不包含激光器发射的激光点,第二帧图像包含激光点。下文假设第一帧图像包含激光器发射的激光点,第二帧图像不包含激光点,需要说明的是,此处作出的假设并不用于限制本公开的保护范围。
首先,拍摄时,摄像模组15在激光器16投射激光时,开始获取第一帧图像,其中,这第一帧图像包含激光点。其次,激光器16停止投射激光时,摄像模组15再次获取第二帧图像,该第二帧图像未包含激光点。一般的,摄像模组15的拍摄帧率比较高,例如,摄像模组15的帧率为25-30帧/秒,上述第一帧图像与第二帧图像的时间间隔为33-40毫秒,由于拍摄速度比较快,上述两帧图像之间除了激光点从包含激光点逐变为不包含激光点,其它变化比较小或者无变化。
在一些实施例中,为了方便后续图像处理,控制单元11采用低通均值滤波方式分别处理第一帧图像与第二帧图像,滤除摄像模组15在拍摄该第一帧图像与第二帧图像时产生的一些随机高频噪声。滤波时,控制单元11可以选择7x7低通均值滤波器处理上述两帧图像,需要说明的是,本公开实施例不限于其他滤波方式对图像产生的随机噪声进行滤除。
请参阅图2a,激光器16向宠物120投射激光点140,宠物120关注该激光点140。在识别激光点时,控制单元11启动激光器16向宠物120投射激光点140,摄像模组15拍摄第一帧图像,该第一帧图像包括宠物120与激光点140。紧接着,控制单元11关闭激光器16,激光器16停止向宠物120投射激光点140,摄像模组15拍摄第二帧图像,该第二帧图像包括宠物120,但不包括激光点140。 于是,控制单元11获取上述连续的两帧图像。
S22、电子设备基于采集的第一帧图像与采集的第二帧图像的像素差异信息,识别出激光点在第一帧图像或第二帧图像中的位置。
像素差异信息包括第一帧图像与第二帧图像同一坐标对应像素之间的像素差值,如前所述,上述两帧图像之间除了激光点从包含激光点逐变为不包含激光点,其它变化比较小或者无变化,因此,像素差异信息主要是两帧图像之间是否存在激光点图像引起的像素差异。
控制单元11根据图像分析算法,将上述一帧图像与第二帧图像比对处理,识别出激光点。进一步的,控制单元11通过计算出该激光点在上述第一帧图像中的像素坐标点,进而确定该激光点在第一帧图像中的位置。
如前所述,上述两帧图像之间除了激光点变化之外,其它变化比较小,逗宠设备100能够适应复杂成像环境,快速地、准确地从该第一帧图像中识别出激光点的位置。
在识别出激光点在第一帧图像中的位置时,控制单元11可以采用帧差方式以快速准确地识别出激光点的位置。在一些实施例中,请参阅图3,S22包括:
S221、电子设备基于采集的第一帧图像与采集的第二帧图像的像素差异信息,生成帧差图像;
S222、电子设备根据帧差图像,识别出激光点在第一帧图像或第二帧图像中的位置。
帧差图像为将第一帧图像与第二帧图像作像素帧差运算后的图像,帧差图像保留着激光点图像。由于两帧图像除了激光点的变化明显,其它物体变化并不明显,通过将第一帧图像与第二帧图像作像素帧差运算,其能够快速地滤除不符合激光点成像特征的干扰图像,并至少保留着激光点图像。
对于激光点所处环境的背景为单色背景时,例如背景为白色背景或黑色背景,控制单元11通过将第一帧图像与第二帧图像作像素帧差运算,从而滤除非激光点成像区域,使得帧差图像只剩下激光点成像区域,因此,控制单元11能够快速准确地识别出激光点在第一帧图像中的位置。
对于激光点所处环境的背景比较复杂时,例如,请参阅图3a与图3b,图3a为本公开实施例提供一种在复杂成像环境下包含激光点的第一帧图像,图3b为本公开实施例提供一种在复杂成像环境下未包含激光点的第二帧图像,控制 单元11通过将第一帧图像与第二帧图像作像素帧差运算,滤除大部分非激光点成像区域的干扰图像区域,从而为后续快速准确地识别出激光点在第一帧图像中的位置做好准备。
控制单元11获得帧差图像后,根据激光点图像条件,从帧差图像筛选出与激光点成像对应的图像区域,并将该图像区域作为激光点,进而识别出激光点在第一帧图像中的位置。
生成帧差图像的方式可以为多种图像处理方式。在一些实施例中,电子设备可以根据激光点的颜色特征,快速准确地生成帧差图像。S221包括:将第一帧图像与第二帧图像作像素帧差运算,生成帧差图,帧差图中每个帧差像素至少满足以下条件:
当第一帧图像与第二帧图像同一坐标的像素中目标颜色分量相减后的差值大于或等于预设帧差阈值时,帧差图像中与同一坐标对应的像素中的目标颜色分量被置为第一预设颜色值。
当第一帧图像与第二帧图像同一坐标的像素中目标颜色分量相减后的差值小于预设帧差阈值时,帧差图像中与同一坐标对应的像素中的目标颜色分量被置为第二预设颜色值。第一预设颜色值与第二预设颜色值不同。
第一预设颜色值与第二预设颜色值不同,并且,第一预设颜色值与第二预设颜色值可以由用户自定义,例如,第一预设颜色值为255,第二预设颜色值为0。
在本实施例中,激光点的颜色对应目标颜色分量。如前所述,激光点的基本特征为中心高亮而呈白色,激光点的颜色可以为多种多样,诸如红色、绿色、蓝色等等。激光点的颜色种类不同,目标颜色分量也随之不同,例如,激光点的颜色为红色时,目标颜色分量为红色分量,非目标颜色分量为蓝色分量与绿色分量;激光点的颜色为绿色时,目标颜色分量为绿色分量,非目标颜色分量为蓝色分量与红色分量;激光点的颜色为蓝色时,目标颜色分量为蓝色分量,非目标颜色分量为红色分量与绿色分量。
由于该第一帧图像与该第二帧图像之间除了激光点变化之外,其它变化比较小,并且,激光点具有明显的颜色特征,激光点在该第一帧图像中像素的目标颜色分量大于在该第二帧图像中像素的目标颜色分量。
控制单元11根据目标颜色分量,将第一帧图像与第二帧图像作像素帧差运 算后,快速有效地过滤掉一些干扰图像,并至少保留着激光点的图像。
目标颜色分量不同,生成帧差图亦不同。举例而言,激光点的颜色为红色,目标颜色分量为红色分量,非目标颜色分量为绿色分量与蓝色分量,控制单元11判断第一帧图像中每个像素的各个颜色分量与第二帧图像中每个像素的颜色分量之间的关系是否满足下述条件,以生成帧差图像,下述条件如下所述:
1、当满足以下条件:所述第一帧图像中每个像素的红色分量减去所述第二帧图像中每个像素的红色分量之差大于或等于预设帧差阈值时,所述帧差图像中像素的红色分量为第一预设颜色值;
2、当满足以下条件:第一帧图像中每个像素的红色分量减去第二帧图像中每个像素的红色分量之差小于预设帧差阈值时,帧差图像中像素的红色分量为第二预设颜色值。
3、当满足以下条件:第一帧图像中每个像素的红色分量减去第二帧图像中每个像素的红色分量之差大于或等于预设帧差阈值,且第一帧图像中每个像素的绿色分量减去第二帧图像中每个像素的绿色分量之差大于或等于预设帧差阈值时,帧差图像中像素的绿色分量为第三预设颜色值。
4、当满足以下条件:第一帧图像中每个像素的红色分量减去第二帧图像中每个像素的红色分量之差小于预设帧差阈值,或者,第一帧图像中每个像素的绿色分量减去第二帧图像中每个像素的绿色分量之差小于预设帧差阈值时,帧差图像中像素的绿色分量为第四预设颜色值;第三预设颜色值与第四预设颜色值不同。
5、当满足以下条件:第一帧图像中每个像素的红色分量减去第二帧图像中每个像素的红色分量之差大于或等于预设帧差阈值,且第一帧图像中每个像素的蓝色分量减去第二帧图像中每个像素的蓝色分量之差大于或等于预设帧差阈值时,帧差图像中像素的蓝色分量为第五预设颜色值。
6、当满足以下条件:第一帧图像中每个像素的红色分量减去第二帧图像中每个像素的红色分量之差小于预设帧差阈值,或者,第一帧图像中每个像素的蓝色分量减去第二帧图像中每个像素的蓝色分量之差小于预设帧差阈值时,帧差图像中像素的蓝色分量为第六预设颜色值,第五预设颜色值与第六预设颜色值不同。
第一预设颜色值至第六预设颜色值由用户自定义。
在一些实施例中,将上述6个条件以公式表达时,其可以为:
公式一:
D r(x,y) = 255，当 F r(x,y) − S r(x,y) ≥ R th 时；否则 D r(x,y) = 0
D g(x,y) = 255，当 F r(x,y) − S r(x,y) ≥ R th 且 F g(x,y) − S g(x,y) ≥ R th 时；否则 D g(x,y) = 0
D b(x,y) = 255，当 F r(x,y) − S r(x,y) ≥ R th 且 F b(x,y) − S b(x,y) ≥ R th 时；否则 D b(x,y) = 0
其中,R th为帧差阈值,F r(x,y)、F g(x,y)、F b(x,y)分别为该第一帧图像中位于x行y列像素的红色分量、绿色分量、蓝色分量,S r(x,y)、S g(x,y)、S b(x,y)分别为该第二帧图像中位于x行y列像素的红色分量、绿色分量、蓝色分量,D r(x,y)、D g(x,y)、D b(x,y)分别为该帧差图中位于x行y列像素的红色分量、绿色分量、蓝色分量。
此处,第一预设颜色值、第三预设颜色值及第五预设颜色值皆为255,第二预设颜色值、第四预设颜色值及第六预设颜色值皆为0。因此,控制单元11通过公式1得到帧差图各个像素值,并根据帧差图各个像素值生成帧差图。
帧差阈值R th用于当激光点在该第一帧图像中像素的目标颜色分量大于在该第二帧图像中像素的目标颜色分量,至少能够在帧差图中有效地滤除大部分干扰图像,帧差阈值R th为在该第一帧图像中像素的目标颜色分量减去在该第二帧图像中像素的目标颜色分量的最小有效差值,例如,在本实施例中,帧差阈值R th为上述两帧图像中对应像素的红色分量差值的最小有效值。帧差阈值R th的取值范围为[4,10],帧差阈值R th由用户根据产品需求自行选择,在本实施例中,帧差阈值R th取值5。
进一步的,即使上述两帧图像中对应像素的非目标分量差值大于帧差阈值R th,但是上述两帧图像中对应像素的目标颜色分量差值不大于帧差阈值R th,上述两帧图像中对应像素作完帧差运算后的帧差像素并不满足激光点图像有目标颜色分量的特征,因此,控制单元11将上述两帧图像中对应像素作完帧差运算后的帧差像素置为黑色,从而减小后面的计算量与干扰。例如,激光点颜色为红色,上述两帧图像中对应像素的绿色分量差值或蓝色分量差值大于帧差阈值 R th,但是上述两帧图像中对应像素的红色颜色分量差值不大于帧差阈值R th,控制单元11将上述两帧图像中对应像素作完帧差运算后的帧差像素置为黑色。
请参阅图4,图4是本公开实施例提供一种将图3a与图3b作像素帧差运算后的帧差图像。由图4可知,该帧差图像有效地滤除一些干扰图像,并至少保留着激光点的图像,值得说明的是,图4的原始图像为彩色图像,为了满足专利附图审查要求,故将原始图像转换成灰度图像,如图4所示。
在本实施例中,若图3a与图3b皆为单色背景时,该帧差图可以只保留着激光点的图像。
在一些实施例中,当目标颜色分量为非红色分量(为绿色分量或蓝色分量)时,上述公式适应性地改变。本领域技术人员应当明白,本文提供的激光点颜色为红色这一实施例并不用于限制本公开的保护范围,只是用于解释以及辅助本领域技术人员理解本公开实施例的技术方案。
控制单元11得到帧差图像后,可以通过选择图像连通域方式以识别出激光点在该第一帧图像中的位置。在一些实施例中,请参阅图4a,S222包括:
S2221、将帧差图像转换成二值化图像;
S2222、从二值化图像中搜索一个或多个图像连通域;
S2223、从一个或多个图像连通域选择满足激光点图像条件的一个或多个图像连通域作为激光点;
S2224、获取选择的一个或多个图像连通域在第一帧图像或第二帧图像中的坐标位置。
虽然帧差图像保留着激光点图像,但是,帧差图像中保留着各个图像(包括激光点图像)仍然为彩色图像。控制单元11为了能够快速识别出激光点在该第一帧图像中所处的位置,于是,控制单元11将帧差图像转换成二值化图像,例如,当帧差图像中像素的目标颜色分量为第一预设颜色值时,电子设备将像素的像素值置为第一预设像素值。当帧差图像中像素的目标颜色分量为第二预设颜色值时,电子设备将像素的像素值置为第二预设像素值,第一预设像素值与第二预设像素值不同,其中,第一预设像素值与第二预设像素值可以由用户自定义。
由上述公式一可知,帧差图每个像素的红色分量只有两个取值,分别为0或255。在一些实施例中,控制单元11将帧差图像转换成二值化图像时,可以 根据以下公式二:
B(x,y) = 255，当 D r(x,y) = 255 时；否则 B(x,y) = 0
将帧差图像转换成二值化帧差图像;
其中,B(x,y)为二值化图像中位于x行y列像素点亮度值。
请参阅图4b,图4b是将图4作二值化处理后的二值化图像的示意图。由图4b可知,控制单元11将红色的激光点图像置为白色。
图像连通域(Connected Component)是指图像中具有相同像素值且位置相邻的前景像素点组成的图像区域。
控制单元11根据连通区域分析算法(Connected Component Analysis,Connected Component Labeling),从二值化图像中搜索图像连通域,其中,连通区域分析算法可以为Two-Pass(两遍扫描法)或Seed Filling(种子填充法)等等。
在搜索图像连通域时,每个图像连通域在帧差图像或二值化图像内被预设形状包络,如图4c所示,每个图像连通域被矩形包络。可以理解的是,预设形状不仅可以为矩形,而且还可以为其它合适的形状,诸如圆形、椭圆形或菱形等等。
对于单色背景的二值化图像,图像连通域的数量比较少,控制单元11可以以较少运算量处理单色背景的二值化图像,以识别出激光点在第一帧图像中的位置。
然而,对于复杂背景的二值化图像,如图4c所示,该二值化图像存在数量比较多的图像连通域,控制单元11需要花费大量运算以识别出激光点在第一帧图像中的位置。因此,为了减少运算量,提高识别速率,控制单元11根据包络图像连通域的预设形状,初步筛选出满足激光点的基础形状条件的图像连通域。举例而言:预设形状为矩形,首先,控制单元11判断包络图像连通域的矩形是否同时满足以下式子的要求:
D min≤CC W≤D max
D min≤CC H≤D max
A min≤CC A
其中,CC W为矩形的宽度,CC H为矩形的高度,CC A为图像连通域的面积。 D min为矩形的宽度或高度对应的预设最小值,取值范围为[1,10],本实施例取值2。D max为矩形的宽度或高度对应的预设最大值,取值范围为[40,70],本实施例取值60。A min为图像连通域对应的预设最小面积,取值范围为[10,60],本实施例取值20。
其次,若满足,保留图像连通域,保留方式可以为依然使用矩形包络图像连通域,如图4d所示。若未满足,丢弃图像连通域,丢弃方式可以为不再使用矩形包络图像连通域。
在一些实施例中,当具有多个激光器16投射多个激光点时,控制单元11从一个或多个图像连通域选择满足激光点图像条件的一个或多个图像连通域作为激光点,因此,其可以识别出多个激光点。
因此,通过初步筛选,控制单元11能够将明显不符合激光点的基础形状条件的图像连通域滤除掉,以减少后续图像分析量与运算量。
由于激光点的基本特征与成像环境下其它物体的特征有所区别,因此,激光点图像的综合特征与其它物体图像的综合特征是不同的,于是,搜索图像连通域时,控制单元11根据激光点成像时的基本特征,例如,激光点的形状、中心亮度或颜色分量等等,从若干个图像连通域筛选出最优的图像连通域。因此,在一些实施例中,请参阅图5,S2222包括:
S51、电子设备计算出每个图像连通域的权重;
S52、电子设备比较任意两个所述图像连通域的各自权重;
S53、电子设备根据比较结果,从一个或多个图像连通域中遍历出权重最高的图像连通域,并选择权重最高的图像连通域作为激光点。
图像连通域的类型包括正在处理的连通域与当前候选连通域,正在处理的连通域与当前候选连通域皆包括若干特征值,每个特征值皆对应一个权重。控制单元11通过统计出每个图像连通域中各个特征值的权重,以获得每个图像连通域的权重。因此,在一些实施例中,计算每个图像连通域的权重时,请参阅图5a,S51包括:
S511、电子设备计算出正在处理的图像连通域或当前候选图像连通域中每个特征值的权重;
S512、电子设备联合相加若干特征值的权重,得到正在处理的图像连通域 或当前候选图像连通域的权重。
每个图像连通域可以采用一种或两种或多种特征值从各个维度描述图像连通域的特征。下文从6个维度描述每个图像连通域的特征,不过,本领域技术人员可以理解的是:出于识别激光点在该第一帧图像中的位置的目的,本领域技术人员可以选择一个特征值表达图像连通域的特征,亦可以选择两个特征值综合表达图像连通域的特征,更可以选择多个特征值综合并全方位地表达图像连通域的特征,只要能够识别出激光点在该第一帧图像中的位置,本领域技术人员可以选择任意数量的特征值,或者,可以组合任意特征值以识别出激光点在该第一帧图像中的位置。例如,本领域技术人员可以只选择中心亮度差异值
ΔB c 表达图像连通域的特征，亦可以选择中心亮度差异值 ΔB c
与目标颜色像素占比率R 1,更可以选择本实施例提供的6个特征值表达图像连通域的特征。本领域技术人员在本实施例所训导内容的基础上,作出任何替换或改变方式,其应当落入本公开的保护范围之内。
对于一个图像连通域,其可以包括如下特征值:
1、中心亮度差异值 ΔB c。中心亮度差异值 ΔB c 为图像连通域中心在该第一帧图像中对应像素点的亮度与在第二帧图像中对应像素点的亮度之差的绝对值。
图像连通域中心为图像连通域内亮度最大的像素点,对于激光点图像,其对应的图像连通域的中心亮度亦是最大的。对于激光点图像,其在该第一帧图像中对应像素点的亮度与在第二帧图像中对应像素点的亮度之间的差值比较大,比较突出。对于其它静态物体在该第一帧图像中对应像素点的亮度与在第二帧图像中对应像素点的亮度之间的差值比较小,因此,通过中心亮度差值的比较方式,其能够提高筛选出激光点对应的图像连通域的概率。
2、目标颜色像素占比率R 1。目标颜色像素占比率R 1为该第一帧图像中与图像连通域对应的区域内,目标颜色像素占图像连通域中所有像素的比率。举例而言,当激光点颜色为红色时,目标颜色像素为红色像素,其中,该红色像素定义为:HSV值位于(140,40,200)与(179,255,255)之间的像素),于是,控制单元11便可以计算出红色像素占比率R 1
对于除了激光点图像变化之外,其它物体几乎不变化的两帧图像,一般而言,非激光点图像对应的图像连通域中的目标颜色像素占比率R 1也变化不大, 该第一帧图像中的非激光点图像对应的图像连通域中的目标颜色像素占比率R 1至少不大于该第二帧图像中的非激光点图像对应的图像连通域中的目标颜色像素占比率R 1,但是,该第一帧图像中的激光点图像对应的图像连通域中的目标颜色像素占比率R 1大于在该第二帧图像中与激光点图像对应的图像连通域中的目标颜色像素占比率R 1。因此,通过此种方式,其能够提高筛选出激光点对应的图像连通域的概率。
3、白色像素占比率R w。该第一帧图像中与图像连通域对应的区域内,白色像素占所述连通域中所有像素的比率。
在一些实施例中,白色像素定义为:灰度值大于W th的像素,W th的取值范围为[200,250],本实施例取值210。
4、矩形框的宽高比率R wh。矩形框在二值化帧差图像内包络每个图像连通域。其中,矩形框的宽高比率R wh的表达式为:
R wh = CC W/CC H（当 CC W ≤ CC H 时）；R wh = CC H/CC W（当 CC W > CC H 时）
其中,CC W为矩形的宽度,CC H为矩形的高度。
在上述表达式中,当CC W小于或等于CC H时,控制单元11选择R wh为CC w/CC H。当CC W大于CC H时,控制单元11选择R wh为CC H/CC W
5、目标颜色分量占比率R 2。目标颜色分量占比率R 2为帧差图与图像连通域对应的区域内除去白色之外,并以目标颜色分量为基准的各个颜色像素占图像连通域中所有像素的比率。举例而言,在激光点的颜色为红色的情况下,目标颜色分量占比率R 2为帧差图与图像连通域对应的区域内红色、粉色、黄色像素占图像连通域中所有像素的比率R rpy
6、连通域中心像素亮度B c。连通域中心像素亮度B c为图像连通域中心在该第一帧图像中对应像素点的亮度。
为了提高确认出激光点对应的图像连通域的概率,本文根据各个特征值的重要性为其赋予不同的权重,例如,赋予一个对应整数的比特位。特征值越重要,其权重越大,亦即,其对应的比特位越高,例如,0x200对应的特征值比0x100对应的特征值重要。最后,再将全部特征值的权重相加,组成该图像连通域的权重,并选择权重大的图像连通域作为候选连通域。
至此,下文接着介绍计算正在处理的连通域的权重和当前候选连通域的权重。
为了方便描述,定义如下标记:
cur:代表“正在处理的图像连通域”;
can:代表“当前候选连通域”。
1)对于“正在处理的图像连通域”:
1.1、中心亮度差异值 ΔB c 的权重表达式为：
W(ΔB c(cur)) = 0x200，当 ΔB c(cur) > 2·ΔB c(can) 时；否则 W(ΔB c(cur)) = 0
其中，W(ΔB c(cur)) 为正在处理的图像连通域的中心亮度差异值 ΔB c 的权重，ΔB c(cur) 为正在处理的图像连通域的中心亮度差异值，ΔB c(can) 为当前候选图像连通域的中心亮度差异值。在上述表达式中，当 ΔB c(cur) > 2·ΔB c(can) 时，控制单元11选择 W(ΔB c(cur)) 为“0x200”；当 ΔB c(cur) ≤ 2·ΔB c(can) 时，控制单元11选择 W(ΔB c(cur)) 为“0”。
在本实施例中,“0x200”由用户根据产品需求自行定义,“2”为经验值并可以由用户自行定义。
1.2、目标颜色像素占比率R 1的权重表达式为:
W(R 1(cur)) = 0x100，当 R 1(cur) > R 1(can) 时；否则 W(R 1(cur)) = 0
其中,W(R 1(cur))为正在处理的图像连通域的目标颜色像素占比率R 1的权重,R 1(cur)为正在处理的图像连通域的目标颜色像素占比率R 1,R 1(can)为当前候选图像连通域的目标颜色像素占比率R 1
在上述表达式中,当R 1(cur)>R 1(can)时,控制单元11选择W(R 1(cur))为“0x100”。当R 1(cur)≤R 1(can)时,控制单元11选择W(R 1(cur))为“0”。例如,对于激光点颜色为红色时,目标颜色像素为红色像素,其中,该红色像素定义为:HSV值位于(140,40,200)与(179,255,255)之间的像素),于是,控制单元11便可以计算出红色像素占比率R 1
在本实施例中,“0x100”由用户根据产品需求自行定义。
1.3、白色像素占比率R w的权重表达式为：
W(R w(cur)) = 0x40，当 R w(cur) > R w(can) 时；否则 W(R w(cur)) = 0
其中,W(R w(cur))为正在处理的图像连通域的白色像素占比率R w的权重,R w(cur)为正在处理的图像连通域的白色像素占比率R w,R w(can)为当前候选图像连通域的白色像素占比率R w
在上述表达式中,当R w(cur)>R w(can)时,控制单元11选择W(R w(cur))为“0x40”。当R w(cur)≤R w(can)时,控制单元11选择W(R w(cur))为“0”。
1.4、联合相加特征值R wh、R 2及B c的权重的表达式:
W(Join(cur)) = 0x80，当 R wh(cur) > R wh(can) 且 R 2(cur) > R 2(can) 且 B c(cur) > B c(can) 时；否则 W(Join(cur)) = 0
W(Join(cur))为联合相加特征值R wh、R 2及B c的权重,R wh(cur)为正在处理的图像连通域的矩形框的宽高比率,R wh(can)为当前候选图像连通域的矩形框的宽高比率,R 2(cur)为正在处理的图像连通域的目标颜色分量占比率,B c(cur)为正在处理的图像连通域的连通域中心像素亮度,B c(can)为当前候选图像连通域的连通域中心像素亮度。
在上述表达式中,
当R wh(cur)>R wh(can)且R 2(cur)>R 2(can)且B c(cur)>B c(can),控制单元11选择W(Join(cur))为“0x80”。
当R wh(cur)≤R wh(can)或R 2(cur)≤R 2(can)或B c(cur)≤B c(can),控制单元11选择W(Join(cur))为“0”。
在本实施例中,“0x80”由用户根据产品需求自行定义。
综上,控制单元11在计算出正在处理的图像连通域中每个特征值的权重时,根据上述各个表达式,判断与选择对应的权重。
当控制单元11得到每个特征值的权重后,联合相加各个特征值的权重,得到正在处理的连通域的权重。例如,控制单元11根据以下表达式:
W(cur) = W(ΔB c(cur)) + W(R 1(cur)) + W(R w(cur)) + W(Join(cur))
得到正在处理的图像连通域的权重。
2)对于“当前候选图像连通域”:
2.1、中心亮度差异值 ΔB c 的权重表达式为：
W(ΔB c(can)) = 0x200，当 ΔB c(can) > 2·ΔB c(cur) 时；否则 W(ΔB c(can)) = 0
其中，W(ΔB c(can)) 为当前候选连通域的中心亮度差异值 ΔB c 的权重，ΔB c(can) 为当前候选连通域的中心亮度差异值。在上述表达式中，当 ΔB c(can) > 2·ΔB c(cur) 时，控制单元11选择 W(ΔB c(can)) 为“0x200”；当 ΔB c(can) ≤ 2·ΔB c(cur) 时，控制单元11选择 W(ΔB c(can)) 为“0”。
2.2、目标颜色像素占比率R 1的权重表达式为:
W(R 1(can)) = 0x100，当 R 1(cur) < R 1(can) 时；否则 W(R 1(can)) = 0
其中,W(R 1(can))为当前候选连通域的目标颜色像素占比率R 1的权重。
在上述表达式中,当R 1(cur)<R 1(can)时,控制单元11选择W(R 1(can))为“0x100”。当R 1(cur)≥R 1(can)时,控制单元11选择W(R 1(can))为“0”。
2.3、白色像素占比率R w的权重表达式为：
W(R w(can)) = 0x40，当 R w(cur) < R w(can) 时；否则 W(R w(can)) = 0
其中,W(R w(can))为当前候选连通域的白色像素占比率R w的权重。
在上述表达式中,当R w(cur)<R w(can)时,控制单元11选择W(R w(can))为“0x40”。当R w(cur)≥R w(can)时,控制单元11选择W(R w(can))为“0”。
2.4、联合相加特征值R wh、R 2及B c的权重的表达式:
W(Join(can)) = 0x80，当 R wh(cur) < R wh(can) 且 R 2(cur) < R 2(can) 且 B c(cur) < B c(can) 时；否则 W(Join(can)) = 0
W(Join(can))为联合相加特征值R wh、R 2及B c的权重。
在上述表达式中,
当R wh(cur)<R wh(can)且R 2(cur)<R 2(can)且B c(cur)<B c(can),控制单元11选择 W(Join(can))为“0x80”。
当R wh(cur)≥R wh(can)或R 2(cur)≥R 2(can)或B c(cur)≥B c(can),控制单元11选择W(Join(can))为“0”。
综上,控制单元11在计算出当前候选连通域中每个特征值的权重时,根据上述各个表达式,判断与选择对应的权重。
当控制单元11得到每个特征值的权重后,联合相加各个特征值的权重,得到当前候选连通域的权重。例如,控制单元11根据以下表达式:
W(can) = W(ΔB c(can)) + W(R 1(can)) + W(R w(can)) + W(Join(can))
得到当前候选连通域的权重。
控制单元11得到对应图像连通域(正在处理的连通域或当前候选连通域)的权重后,在一些实施例中,控制单元11可以采用淘汰机制,遍历所有满足基本要求的图像连通域,将正在处理的连通域与当前的候选连通域比较,根据比较结果,从若干个图像连通域中遍历出权重最高的图像连通域。因此,在一些实施例中,请参阅图5b,S53包括:
S531、电子设备判断正在处理的图像连通域的权重是否大于当前候选图像连通域的权重;
S532、若大于,电子设备将正在处理的图像连通域替换成当前候选图像连通域,并作为新的当前候选图像连通域;
S533、若小于,电子设备丢弃正在处理的图像连通域,保留当前图像候选连通域;
S534、电子设备判断是否存在下一个正在处理的图像连通域;
S535、若存在,电子设备将下一个正在处理的图像连通域作为所述正在处理的图像连通域,并返回继续判断正在处理的图像连通域的权重是否大于当前候选图像连通域的权重;
S536、若未存在,电子设备判断是否存在当前候选图像连通域;
S537、若存在,电子设备将所述当前候选图像连通域作为最优的图像连通域;
S538、若未存在,电子设备判断出未存在最优的图像连通域。
在本实施例中,初始化时,由于没有当前候选图像连通域,所以其权重W(can) 置为0。若正在处理的图像连通域的权重大于当前候选图像连通域的权重,则将正在处理的图像连通域设为当前候选连通域。若正在处理的图像连通域的权重小于当前候选图像连通域的权重,丢弃该正在处理的图像连通域,继续取出下一个正在处理的图像连通域进行比较。
当控制单元11在该第一帧图像中识别出激光点的位置后,控制单元11还可以标记该激光点。请参阅图5c,控制单元11使用一定颜色的圆圈圈住激光点。
可以理解的是,“标记”是为了突出激光点,然而,突出激光点的方式可以多种多样,例如,渲染激光点图像,扩大激光点图像等等。
可以理解的是,在上述各个实施例中,激光点识别方法可以作为存储在逗宠设备的非临时性存储介质中的一组指令集来实现,亦可以作为存储在用户终端的非临时性存储介质中的一组指令集来实现,更可以作为存储在云端服务器或本地服务器的非临时性存储介质中的一组指令集来实现。因此,激光点识别方法不仅能够应用于逗宠设备,而且还能够应用于用户终端或云端服务器或本地服务器。
因此,当上述各个实施例的激光点识别方法应用于逗宠设备时,其可以将标记好的图像传输至用户终端。当上述各个实施例的激光点识别方法应用于用户终端时,用户终端可以本地在图像中标记激光点。当上述各个实施例的激光点识别方法应用于服务器时,其可以将标记好的图像传输至用户终端。
在上述各个实施例中,当用户需要调整激光点在家里投射的位置时,用户在用户终端的用户界面内单击某个区域,于是,用户终端向逗宠设备发送控制指令,逗宠设备的控制单元接收控制指令,并根据控制指令,控制激光点投射至目标位置,其中,该目标位置与该某个区域相对应。例如,用户需要调整激光点重新投射在家里的书桌上,于是,用户在用户界面下的图像中单击书桌,于是,逗宠设备的激光器向书桌投射激光点。
需要说明的是,在上述各个实施例中,上述各步骤之间并不必然存在一定的先后顺序,本领域普通技术人员,根据本公开实施例的描述可以理解,不同实施例中,上述各步骤可以有不同的执行顺序,亦即,可以并行执行,亦可以交换执行等等。
作为本公开实施例的另一方面,本公开实施例提供一种电子设备识别激光器发射的激光点装置。本公开实施例的电子设备识别激光器发射的激光点装置 可以作为其中一个软件功能单元,电子设备识别激光器发射的激光点装置包括若干指令,该若干指令存储于存储器内,处理器可以访问该存储器,调用指令进行执行,以完成上述电子设备识别激光器发射的激光点方法。
请参阅图6,电子设备识别激光器发射的激光点装置600包括:采集模块61与识别模块62。
采集模块61用于在激光器发射激光和暂停发射激光的两种状态下,采集第一帧图像和第二帧图像,其中,第一帧图像和第二帧图像中一帧图像包含激光器发射的激光点,另一帧不包含激光点;
识别模块62用于基于采集的第一帧图像与采集的第二帧图像的像素差异信息,识别出激光点在第一帧图像或第二帧图像中的位置。
综上,其能够适应复杂成像环境,快速地、准确地从一帧图像或第二帧图像中识别出激光点的位置。
在一些实施例中,请参阅图6a,识别模块62包括:生成单元621与识别单元622。
生成单元621用于基于采集的第一帧图像与采集的第二帧图像的像素差异信息,生成帧差图像。
识别单元622用于根据帧差图像,识别出激光点在第一帧图像或第二帧图像中的位置。
在一些实施例中,激光点的颜色对应目标颜色分量,第一帧图像包含所述激光点,第二帧图像不包含激光点。生成单元621具体用于:将第一帧图像与第二帧图像作像素帧差运算,生成帧差图,帧差图中每个帧差像素至少满足以下条件:
当第一帧图像与第二帧图像同一坐标的像素中目标颜色分量相减后的差值大于或等于预设帧差阈值时,帧差图像中与同一坐标对应的像素中的目标颜色分量被置为第一预设颜色值;
当第一帧图像与第二帧图像同一坐标的像素中目标颜色分量相减后的差值小于预设帧差阈值时,帧差图像中与同一坐标对应的像素中的目标颜色分量被置为第二预设颜色值,第一预设颜色值与第二预设颜色值不同。
The color of the laser spot is red, the target color component is the red component, and the non-target color components are the green component and the blue component;

when the red component of each pixel in the first frame image minus the red component of the pixel at the same coordinates in the second frame image is greater than or equal to the preset frame-difference threshold, the red component of the corresponding pixel in the frame-difference image is the first preset color value;

when the red-component difference is smaller than the preset frame-difference threshold, the red component of the corresponding pixel in the frame-difference image is the second preset color value;

when the red-component difference is greater than or equal to the preset frame-difference threshold, and the green component of each pixel in the first frame image minus the green component of the pixel at the same coordinates in the second frame image is also greater than or equal to the preset frame-difference threshold, the green component of the corresponding pixel in the frame-difference image is a third preset color value;

when the red-component difference is smaller than the preset frame-difference threshold, or the green-component difference is smaller than the preset frame-difference threshold, the green component of the corresponding pixel in the frame-difference image is a fourth preset color value, the third preset color value being different from the fourth preset color value;

when the red-component difference is greater than or equal to the preset frame-difference threshold, and the blue component of each pixel in the first frame image minus the blue component of the pixel at the same coordinates in the second frame image is also greater than or equal to the preset frame-difference threshold, the blue component of the corresponding pixel in the frame-difference image is a fifth preset color value;

when the red-component difference is smaller than the preset frame-difference threshold, or the blue-component difference is smaller than the preset frame-difference threshold, the blue component of the corresponding pixel in the frame-difference image is a sixth preset color value, the fifth preset color value being different from the sixth preset color value.
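A minimal numpy sketch of this per-channel frame difference for a red laser spot; the threshold value and the use of 255/0 for the first through sixth preset color values are placeholder choices, since the text leaves them as presets:

```python
import numpy as np

def red_frame_diff(first, second, thresh=40):
    """Per-pixel frame difference of two RGB uint8 frames for a red laser.

    Channel 0 is assumed to be red; 255 and 0 stand in for the preset
    color values, and thresh for the preset frame-difference threshold.
    """
    d = first.astype(np.int16) - second.astype(np.int16)
    r_hit = d[..., 0] >= thresh             # red difference vs. threshold
    g_hit = r_hit & (d[..., 1] >= thresh)   # green set only if red also passes
    b_hit = r_hit & (d[..., 2] >= thresh)   # blue set only if red also passes

    diff = np.zeros_like(first)
    diff[..., 0] = np.where(r_hit, 255, 0)
    diff[..., 1] = np.where(g_hit, 255, 0)
    diff[..., 2] = np.where(b_hit, 255, 0)
    return diff
```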
In some embodiments, the identification unit 622 is specifically configured to: convert the frame-difference image into a binarized image; search the binarized image for one or more image connected components; select, from the one or more image connected components, one or more image connected components that satisfy the laser-spot image conditions as the laser spot; and obtain the coordinate positions, in the first frame image or the second frame image, of the selected one or more image connected components.

In some embodiments, the identification unit 622 is further specifically configured to: set the pixel value of a pixel to a first preset pixel value when the target color component of that pixel in the frame-difference image is the first preset color value; and set the pixel value of the pixel to a second preset pixel value when the target color component of the pixel in the frame-difference image is the second preset color value, the first preset pixel value being different from the second preset pixel value.
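Continuing the sketch above, the binarization and the connected-component search could look as follows; cv2.connectedComponentsWithStats is one readily available implementation, and the 255/0 pixel values again stand in for the presets:

```python
import cv2
import numpy as np

def binarize_and_label(diff):
    """Binarize on the target (red) color component, then search the
    binary image for connected components."""
    binary = np.where(diff[..., 0] == 255, 255, 0).astype(np.uint8)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    # stats[i] = (x, y, width, height, area); label 0 is the background,
    # so the candidate components are labels 1..n-1.
    return binary, labels, stats[1:], centroids[1:]
```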
In some embodiments, the identification unit 622 is further specifically configured to: calculate the weight of each image connected component; compare the respective weights of any two image connected components; and, based on the comparison results, traverse the one or more image connected components to find the one with the highest weight and select it as the laser spot.
In some embodiments, the types of image connected component include the image connected component being processed and the current candidate image connected component; both comprise several feature values, each feature value corresponding to a weight. The identification unit 622 is then further specifically configured to: calculate the weight of each feature value of the image connected component being processed or of the current candidate image connected component; and jointly add the weights of the several feature values to obtain the weight of the image connected component being processed or of the current candidate image connected component.

In some embodiments, the identification unit 622 is further specifically configured to: determine whether the weight of the image connected component being processed is greater than the weight of the current candidate image connected component; if it is greater, promote the image connected component being processed to be the new current candidate image connected component; if it is smaller, discard the image connected component being processed and keep the current candidate image connected component; determine whether there is a next image connected component to be processed; if there is, take the next image connected component as the image connected component being processed and return to continue determining whether its weight is greater than that of the current candidate image connected component; if there is not, determine whether a current candidate image connected component exists; if it exists, take the current candidate image connected component as the optimal image connected component; if it does not exist, conclude that no optimal image connected component exists.
In some embodiments, the types of feature value include any one or more of the following: the center brightness difference value B_d^abs, the target-color pixel ratio R_1, the white-pixel ratio R_w, the width-height ratio R_wh of the rectangular box, the target-color-component ratio R_2, and the connected-component center pixel brightness B_c.

The center brightness difference value B_d^abs is the absolute value of the difference between the brightness of the pixel corresponding to the center of the image connected component in the first frame image and the brightness of the corresponding pixel in the second frame image.

The target-color pixel ratio R_1 is the ratio of target-color pixels to all pixels of the image connected component, within the region of the first frame image corresponding to the image connected component;

The white-pixel ratio R_w is the ratio of white pixels to all pixels of the connected component, within the region of the first frame image corresponding to the image connected component;

The width-height ratio R_wh is that of the rectangular box which envelops each image connected component within the binarized frame-difference image;

The target-color-component ratio R_2 is the ratio, within the region of the frame-difference image corresponding to the image connected component, of the pixels of each color based on the target color component, excluding white, to all pixels of the image connected component;

The connected-component center pixel brightness B_c is the brightness of the pixel corresponding to the center of the image connected component in the first frame image.

Where the color of the laser spot is red, the target-color-component ratio R_2 is the ratio of red, pink, and yellow pixels to all pixels of the image connected component, within the region of the frame-difference image corresponding to the image connected component.
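Under the same assumptions as the earlier sketches, these feature values could be computed roughly as follows; the brightness measure, the target-color pixel test, and the red/pink/yellow classification are placeholder choices the text does not pin down:

```python
import numpy as np

def brightness(pixel) -> float:
    """Assumed brightness measure: plain mean of the color components."""
    return float(np.mean(pixel))

def component_features(first, second, diff, mask, bbox) -> Component:
    """mask: boolean array selecting the component's pixels in the frame;
    bbox: (x, y, w, h) of the enveloping rectangle."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())       # component center
    region = diff[mask]                           # component pixels in diff
    white = np.all(region == 255, axis=-1)
    red_based = (region[..., 0] == 255) & ~white  # red/pink/yellow, etc.
    x, y, w, h = bbox
    return Component(
        b_d_abs=abs(brightness(first[cy, cx]) - brightness(second[cy, cx])),
        r1=float((first[mask][..., 0] > 200).mean()),  # assumed target-color test
        rw=float(white.mean()),
        rwh=w / h,
        r2=float(red_based.mean()),
        bc=brightness(first[cy, cx]),
    )
```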
In some embodiments, each image connected component is enveloped by a preset shape within the frame-difference image or the binarized image. The identification unit 622 is further specifically configured to preliminarily screen out the image connected components that satisfy the basic shape conditions of a laser spot, according to the preset shape enveloping each image connected component.
In some embodiments, the preset shape comprises a rectangle; the identification unit 622 is further specifically configured to:

determine whether the preset shape enveloping an image connected component simultaneously satisfies the following expressions:

D_min ≤ CC_W ≤ D_max

D_min ≤ CC_H ≤ D_max

A_min ≤ CC_A

if they are satisfied, keep the image connected component;

if they are not satisfied, discard the image connected component;

where CC_W is the width of the rectangle, CC_H is the height of the rectangle, CC_A is the area of the image connected component, D_min is the preset minimum value for the width or height of the rectangle, D_max is the preset maximum value for the width or height of the rectangle, and A_min is the preset minimum area of the image connected component.
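A small sketch of this screening step, reusing the stats rows returned by the connected-component search above; the bound values are assumed presets:

```python
def passes_shape_filter(stats_row, d_min=2, d_max=40, a_min=4) -> bool:
    """Shape screening on one (x, y, width, height, area) row, as
    produced by cv2.connectedComponentsWithStats; bounds are assumptions."""
    _, _, cc_w, cc_h, cc_a = stats_row
    return (d_min <= cc_w <= d_max and
            d_min <= cc_h <= d_max and
            a_min <= cc_a)
```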
In some embodiments, referring to Fig. 6b, the identification module 62 further comprises a filtering unit 623 configured to process the first frame image and the second frame image separately by low-pass mean filtering.
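For instance, the low-pass mean filtering could be done with OpenCV's box blur; the 3x3 kernel size is an assumed choice:

```python
import cv2

def prefilter(first, second, ksize=(3, 3)):
    """Low-pass mean filtering of both frames before differencing."""
    return cv2.blur(first, ksize), cv2.blur(second, ksize)
```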
In some embodiments, referring to Fig. 6c, the laser-spot identification apparatus 600 further comprises a marking module 63 configured to mark the laser spot.

In some embodiments, referring to Fig. 6d, the laser-spot identification apparatus 600 further comprises an acquisition module 64 and a control module 65.

The acquisition module 64 is configured to acquire a control instruction;

The control module 65 is configured to control, according to the control instruction, the laser spot to be projected to a target position.
It should be noted that the above laser-spot identification apparatus can execute the method, provided by the embodiments of the present disclosure, for an electronic device to identify a laser spot emitted by a laser, and possesses the functional modules and beneficial effects corresponding to executing the method. For technical details not exhaustively described in the apparatus embodiment, refer to the method for an electronic device to identify a laser spot emitted by a laser provided by the embodiments of the present disclosure.
As another aspect of the embodiments of the present disclosure, an electronic device is provided. Referring to Fig. 7, the electronic device 700 comprises one or more processors 71 and a memory 72, Fig. 7 taking one processor 71 as an example.

The processor 71 and the memory 72 may be connected by a bus or in another manner, connection by a bus being taken as an example in Fig. 7.

As a non-volatile computer-readable storage medium, the memory 72 can be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions/modules corresponding to the method for an electronic device to identify a laser spot emitted by a laser in the embodiments of the present disclosure. By running the non-volatile software programs, instructions and modules stored in the memory 72, the processor 71 executes the method of the embodiments above for an electronic device to identify a laser spot emitted by a laser, or the various functional applications and data processing of the apparatus of the embodiments above.

The memory 72 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 72 optionally includes memories disposed remotely from the processor 71, and these remote memories may be connected to the processor 71 via a network. Examples of such a network include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.

The program instructions/modules are stored in the memory 72 and, when executed by the one or more processors 71, perform the method for an electronic device to identify a laser spot emitted by a laser in any of the method embodiments above, that is, the various functional applications and data processing of the method of the embodiments above or of the apparatus of the embodiments above.
An embodiment of the present disclosure further provides a non-transitory computer-readable storage medium storing computer-executable instructions for causing an electronic device to execute the method, as described in any of the above, for an electronic device to identify a laser spot emitted by a laser.

The electronic device can therefore adapt to complex imaging environments and quickly and accurately identify the position of the laser spot in the first frame image or the second frame image.

An embodiment of the present disclosure provides a computer program product comprising a computer program stored on a non-volatile computer-readable storage medium, the computer program comprising program instructions that, when executed by an electronic device, cause the electronic device to execute the method, as described in any of the above, for an electronic device to identify a laser spot emitted by a laser.

It can therefore adapt to complex imaging environments and quickly and accurately identify the position of the laser spot in the first frame image or the second frame image.
The apparatus or device embodiments described above are merely illustrative; the unit modules described as separate components may or may not be physically separate, and the components shown as module units may or may not be physical units, that is, they may be located in one place or distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

From the description of the above implementations, a person skilled in the art can clearly understand that the implementations may be realized by means of software plus a general-purpose hardware platform, and of course also by hardware. Based on such an understanding, the above technical solution, in essence or in the part contributing to the related art, may be embodied in the form of a software product; the computer software product may be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc, and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the method described in each embodiment or in certain parts of an embodiment.

Finally, it should be noted that the above embodiments are merely intended to illustrate, not to limit, the technical solutions of the present disclosure; within the idea of the present disclosure, the technical features in the above embodiments or in different embodiments may also be combined, the steps may be implemented in any order, and many other variations of the different aspects of the present disclosure as described above exist, which are not provided in detail for the sake of brevity. Although the present disclosure has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions recorded in the foregoing embodiments, or equivalent substitutions made to some of the technical features therein, and that such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (20)

  1. An electronic device, comprising:
    a camera module;
    a laser;
    at least one processor; and
    a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to:
    control the camera module to collect a first frame image and a second frame image in the two states in which the laser emits laser light and pauses emitting laser light, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
    identify, based on pixel difference information between the collected first frame image and the collected second frame image, the position of the laser spot in the first frame image or the second frame image.
  2. The electronic device according to claim 1, wherein the camera module comprises a camera and a driving assembly, the driving assembly being connected to the camera and configured to drive the camera to move.
  3. The electronic device according to claim 2, wherein the driving assembly comprises a linear motor, a rotary motor, a slide rail and a bracket; an output end of the linear motor is connected to a fixed end of the rotary motor, the bracket is fixedly mounted on an output end of the rotary motor, the camera and the laser are both mounted on the bracket, and the camera is received in the slide rail and is movable freely along the slide rail.
  4. The electronic device according to claim 3, wherein the camera module comprises a fill-light assembly configured to provide fill light for the camera when the camera captures images.
  5. The electronic device according to claim 2, wherein the camera comprises one or more optical sensors and a lens, the one or more optical sensors being disposed on an imaging surface of the lens.
  6. The electronic device according to claim 1, wherein identifying, based on the pixel difference information between the collected first frame image and the collected second frame image, the position of the laser spot in the first frame image or the second frame image comprises:
    generating a frame-difference image based on the pixel difference information between the collected first frame image and the collected second frame image;
    identifying the position of the laser spot in the first frame image or the second frame image according to the frame-difference image.
  7. The electronic device according to claim 6, wherein identifying the position of the laser spot in the first frame image or the second frame image according to the frame-difference image comprises:
    converting the frame-difference image into a binarized image;
    searching the binarized image for one or more image connected components;
    selecting, from the one or more image connected components, one or more image connected components satisfying laser-spot image conditions as the laser spot;
    obtaining coordinate positions of the selected one or more image connected components in the first frame image or the second frame image.
  8. A method for an electronic device to identify a laser spot emitted by a laser, comprising:
    collecting, by the electronic device, a first frame image and a second frame image in the two states in which the laser emits laser light and pauses emitting laser light, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
    identifying, by the electronic device, based on pixel difference information between the collected first frame image and the collected second frame image, the position of the laser spot in the first frame image or the second frame image.
  9. The method according to claim 8, wherein identifying, by the electronic device, based on the pixel difference information between the collected first frame image and the collected second frame image, the position of the laser spot in the first frame image or the second frame image comprises:
    generating, by the electronic device, a frame-difference image based on the pixel difference information between the collected first frame image and the collected second frame image;
    identifying, by the electronic device, the position of the laser spot in the first frame image or the second frame image according to the frame-difference image.
  10. The method according to claim 9, wherein the color of the laser spot corresponds to a target color component, the first frame image contains the laser spot, and the second frame image does not contain the laser spot;
    generating, by the electronic device, the frame-difference image based on the pixel difference information between the collected first frame image and the collected second frame image comprises:
    performing a per-pixel frame-difference operation between the first frame image and the second frame image to generate a frame-difference image, wherein each frame-difference pixel in the frame-difference image satisfies at least the following conditions:
    when a difference obtained by subtracting the target color components of the pixels at the same coordinates in the first frame image and the second frame image is greater than or equal to a preset frame-difference threshold, the target color component of the pixel corresponding to the same coordinates in the frame-difference image is set to a first preset color value;
    when the difference obtained by subtracting the target color components of the pixels at the same coordinates in the first frame image and the second frame image is smaller than the preset frame-difference threshold, the target color component of the pixel corresponding to the same coordinates in the frame-difference image is set to a second preset color value, the first preset color value being different from the second preset color value.
  11. The method according to claim 10, wherein the color of the laser spot is red, the target color component is the red component, and the non-target color components are the green component and the blue component;
    when the red component of each pixel in the first frame image minus the red component of the corresponding pixel in the second frame image is greater than or equal to the preset frame-difference threshold, the red component of the pixel in the frame-difference image is the first preset color value;
    when the red-component difference is smaller than the preset frame-difference threshold, the red component of the pixel in the frame-difference image is the second preset color value;
    when the red-component difference is greater than or equal to the preset frame-difference threshold, and the green component of each pixel in the first frame image minus the green component of the corresponding pixel in the second frame image is also greater than or equal to the preset frame-difference threshold, the green component of the pixel in the frame-difference image is a third preset color value;
    when the red-component difference is smaller than the preset frame-difference threshold, or the green-component difference is smaller than the preset frame-difference threshold, the green component of the pixel in the frame-difference image is a fourth preset color value, the third preset color value being different from the fourth preset color value;
    when the red-component difference is greater than or equal to the preset frame-difference threshold, and the blue component of each pixel in the first frame image minus the blue component of the corresponding pixel in the second frame image is also greater than or equal to the preset frame-difference threshold, the blue component of the pixel in the frame-difference image is a fifth preset color value;
    when the red-component difference is smaller than the preset frame-difference threshold, or the blue-component difference is smaller than the preset frame-difference threshold, the blue component of the pixel in the frame-difference image is a sixth preset color value, the fifth preset color value being different from the sixth preset color value.
  12. The method according to claim 10, wherein identifying, by the electronic device, the position of the laser spot in the first frame image or the second frame image according to the frame-difference image comprises:
    converting, by the electronic device, the frame-difference image into a binarized image;
    searching, by the electronic device, the binarized image for one or more image connected components;
    selecting, by the electronic device, from the one or more image connected components, one or more image connected components satisfying laser-spot image conditions as the laser spot;
    obtaining, by the electronic device, coordinate positions of the selected one or more image connected components in the first frame image or the second frame image.
  13. The method according to claim 12, wherein converting, by the electronic device, the frame-difference image into the binarized image comprises:
    when the target color component of a pixel in the frame-difference image is the first preset color value, setting, by the electronic device, the pixel value of the pixel to a first preset pixel value;
    when the target color component of a pixel in the frame-difference image is the second preset color value, setting, by the electronic device, the pixel value of the pixel to a second preset pixel value, the first preset pixel value being different from the second preset pixel value.
  14. The method according to claim 12, wherein selecting, by the electronic device, from the one or more image connected components, the one or more image connected components satisfying the laser-spot image conditions as the laser spot comprises:
    calculating, by the electronic device, the weight of each image connected component;
    comparing, by the electronic device, the respective weights of any two image connected components;
    traversing, by the electronic device, based on the comparison results, the one or more image connected components to find the image connected component with the highest weight, and selecting the image connected component with the highest weight as the laser spot.
  15. The method according to claim 14, wherein the types of image connected component include an image connected component being processed and a current candidate image connected component, the image connected component being processed and the current candidate image connected component each comprising several feature values, each feature value corresponding to a weight;
    calculating, by the electronic device, the weight of each image connected component then comprises:
    calculating, by the electronic device, the weight of each feature value of the image connected component being processed or of the current candidate image connected component;
    jointly adding, by the electronic device, the weights of the several feature values to obtain the weight of the image connected component being processed or of the current candidate image connected component.
  16. The method according to claim 15, wherein traversing, by the electronic device, based on the comparison results, the one or more image connected components to find the image connected component with the highest weight comprises:
    determining, by the electronic device, whether the weight of the image connected component being processed is greater than the weight of the current candidate image connected component;
    if it is greater, promoting, by the electronic device, the image connected component being processed to be the new current candidate image connected component;
    if it is smaller, discarding, by the electronic device, the image connected component being processed and keeping the current candidate image connected component;
    determining, by the electronic device, whether there is a next image connected component to be processed;
    if there is, taking, by the electronic device, the next image connected component as the image connected component being processed, and returning to continue determining whether the weight of the image connected component being processed is greater than the weight of the current candidate image connected component;
    if there is not, determining, by the electronic device, whether a current candidate image connected component exists;
    if it exists, taking, by the electronic device, the current candidate image connected component as the optimal image connected component;
    if it does not exist, determining, by the electronic device, that no optimal image connected component exists.
  17. The method according to claim 15, wherein the types of feature value include any one or more of the following:
    a center brightness difference value B_d^abs, the center brightness difference value B_d^abs being the absolute value of the difference between the brightness of the pixel corresponding to the center of the image connected component in the first frame image and the brightness of the corresponding pixel in the second frame image;
    a target-color pixel ratio R_1, the target-color pixel ratio R_1 being the ratio of target-color pixels to all pixels of the image connected component within the region of the first frame image corresponding to the image connected component;
    a white-pixel ratio R_w, being the ratio of white pixels to all pixels of the connected component within the region of the first frame image corresponding to the image connected component;
    a width-height ratio R_wh of a rectangular box, the rectangular box enveloping each image connected component within the binarized frame-difference image;
    a target-color-component ratio R_2, the target-color-component ratio R_2 being the ratio, within the region of the frame-difference image corresponding to the image connected component, of the pixels of each color based on the target color component, excluding white, to all pixels of the image connected component;
    a connected-component center pixel brightness B_c, the connected-component center pixel brightness B_c being the brightness of the pixel corresponding to the center of the image connected component in the first frame image.
  18. The method according to claim 17, wherein, where the color of the laser spot is red, the target-color-component ratio R_2 is the ratio of red, pink and yellow pixels to all pixels of the image connected component within the region of the frame-difference image corresponding to the image connected component.
  19. The method according to claim 8, further comprising:
    marking, by the electronic device, the laser spot.
  20. A non-volatile computer-readable storage medium, storing computer-executable instructions for causing an electronic device to:
    collect a first frame image and a second frame image in the two states in which the laser emits laser light and pauses emitting laser light, wherein one of the first frame image and the second frame image contains the laser spot emitted by the laser, and the other frame does not contain the laser spot;
    identify, based on pixel difference information between the collected first frame image and the collected second frame image, the position of the laser spot in the first frame image or the second frame image.
PCT/CN2018/093748 2018-06-29 2018-06-29 Method for electronic device to identify laser point emitted by laser, and electronic device WO2020000397A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/093748 WO2020000397A1 (zh) 2018-06-29 2018-06-29 Method for electronic device to identify laser point emitted by laser, and electronic device


Publications (1)

Publication Number Publication Date
WO2020000397A1 true WO2020000397A1 (zh) 2020-01-02

Family

ID=68985459

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/093748 WO2020000397A1 (zh) 2018-06-29 2018-06-29 Method for electronic device to identify laser point emitted by laser, and electronic device

Country Status (1)

Country Link
WO (1) WO2020000397A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6370262B1 (en) * 1994-11-08 2002-04-09 Canon Kabushiki Kaisha Information processing apparatus and remote apparatus for object, using distance measuring apparatus
CN101793966A (zh) * 2009-02-03 2010-08-04 夏普株式会社 Light spot position detection device, optical component and electronic apparatus
CN103677446A (zh) * 2013-11-14 2014-03-26 乐视致新电子科技(天津)有限公司 Display device, and camera-type touch control method and apparatus
CN105918151A (zh) * 2016-07-18 2016-09-07 周佰芹 Pet companion robot
CN106152937A (zh) * 2015-03-31 2016-11-23 深圳超多维光电子有限公司 Spatial positioning device and system, and method thereof
CN106172059A (zh) * 2016-08-31 2016-12-07 长沙长泰机器人有限公司 Pet feeding robot



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 18924888; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 18924888; Country of ref document: EP; Kind code of ref document: A1)