CN114364099A - Method for adjusting an intelligent lighting device, robot, and electronic device


Info

Publication number
CN114364099A
Application number
CN202210039500.0A
Authority
CN (China)
Prior art keywords
environment image, lighting equipment, intelligent lighting, acquiring, image
Legal status
Granted; Active
Other languages
Chinese (zh)
Other versions
CN114364099B (granted publication)
Inventor
高斌
Current Assignee
Cloudminds Robotics Co Ltd
Original Assignee
Cloudminds Robotics Co Ltd
Key events
Application filed by Cloudminds Robotics Co Ltd; priority to CN202210039500.0A
Publication of CN114364099A
PCT filing PCT/CN2023/072077 (WO2023134743A1)
Application granted; publication of CN114364099B

Classifications

    • G03B15/02: Illuminating scene (G PHYSICS > G03B Apparatus or arrangements for taking photographs > G03B15/00 Special procedures for taking photographs)
    • H05B47/10: Controlling the light source (H ELECTRICITY > H05B Circuit arrangements for electric light sources > H05B47/00 Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant)
    • H05B47/105: Controlling the light source in response to determined parameters
    • H05B47/115: Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings
    • H05B47/165: Controlling the light source following a pre-assigned programmed sequence; logic control [LC]
    • Y02B20/40: Control techniques providing energy savings, e.g. smart controller or presence detection (Y02B Climate change mitigation technologies related to buildings > Y02B20/00 Energy efficient lighting technologies)

Abstract

The disclosure provides a method for adjusting an intelligent lighting device, a robot, and an electronic device, wherein the method comprises the following steps: classifying the illumination modes generated by the intelligent lighting device through a convolutional neural network classifier; acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting device; inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image; receiving an instruction selecting an illumination mode for the environment image; acquiring the parameter settings of the intelligent lighting device corresponding to the illumination mode according to the selection instruction; and adjusting the parameters of the intelligent lighting device in the digital twin scene according to the parameter settings. With this method, lighting devices can be adjusted frequently without relying on manual labor, and a suitable illumination mode can be selected at higher efficiency and lower cost to achieve the expected lighting effect, realizing film-grade lighting and ultimately obtaining a suitable picture.

Description

Method for adjusting intelligent lighting equipment, robot and electronic equipment
Technical Field
The present disclosure relates to the field of intelligent control, and in particular, to a method for adjusting an intelligent lighting device, a robot, and an electronic device.
Background
With the continuous development of intelligent equipment, its control is becoming more and more intelligent. In practice, when people take pictures, the lighting rarely meets the ideal shooting conditions: the light may be too dark, too bright, or too uneven.
At present, when shooting a film or short film and the scene lighting does not meet the shooting conditions, a gaffer and a photographer usually have to cooperate to manually move the light sources, reflector panels, and light-absorbing boards, and to move them repeatedly as demands change. For each scene, the director instructs the gaffer to adjust the light position, angle, and brightness, which wastes time and effort. Moreover, these lighting devices usually support only independent control, and an independent lighting device has limited influence on the lighting effect of the environment, causing an extremely poor user experience.
Disclosure of Invention
Based on the problems above, the question is how to adjust lighting devices frequently without relying on a large amount of manpower, and how to select a suitable illumination model at higher efficiency and lower cost to achieve the expected lighting effect, thereby realizing film-grade lighting and ultimately obtaining a suitable picture.
An object of the embodiments of the present invention is to provide a method for adjusting an intelligent lighting device, a robot, and an electronic device that intelligently select a suitable illumination mode for the user, so that the lighting of the environment and of the target object is better and the lighting effect of the intelligent lighting device is improved, thereby solving the above problems.
To achieve the above object, in a first aspect, an embodiment of the present invention provides a method for adjusting a smart lighting device, including:
classifying the illumination mode generated by the intelligent lighting equipment through a convolutional neural network classifier;
acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image;
receiving a selection instruction of an illumination mode of the environment image;
acquiring parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter setting.
Further, the classifying of the illumination modes generated by the intelligent lighting device through the convolutional neural network classifier includes:
comparing the environment image with the standard images corresponding to the illumination modes labeled in the convolutional neural network classifier;
calculating the parameter similarity between the environment image and each standard image from image parameters;
and marking an environment image whose parameter similarity is greater than a first threshold with the illumination mode of the corresponding standard image, and recording the image ID of the marked environment image.
Further, the image ID of the environment image corresponds to different illumination modes according to the different lighting parameters in effect when the robot shoots the environment image.
Further, acquiring an environment image through the robot, and constructing a digital twin scene according to the environment image and the intelligent lighting device, includes:
shooting an environment image through a vision sensor of the robot;
identifying intelligent lighting equipment in the environment image;
establishing a mapping relation between the intelligent lighting equipment in the digital twin scene and the intelligent lighting equipment in a real scene;
and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
Further, the inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image, includes:
inputting an environment image acquired by the robot into the convolutional neural network classifier;
identifying the environment image, and acquiring an image ID corresponding to the environment image;
and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
Further, before the step of receiving an instruction for selecting an illumination mode of the environment image, the method further includes:
establishing a corresponding relation between the illumination mode and the lighting parameters of the intelligent lighting equipment;
and generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
Further, the obtaining the parameter setting of the intelligent lighting device corresponding to the illumination mode according to the selection instruction includes:
acquiring an illumination mode selection instruction of a user through input or identification;
acquiring the parameter settings corresponding to the illumination mode according to the mapping relation between the illumination mode and the lighting parameters of the intelligent lighting equipment;
and acquiring the adjustable range of the parameter setting.
Further, the obtaining of the illumination mode selection instruction of the user through inputting or recognition includes:
acquiring the illumination mode selection instruction of the user through key input and/or touch-screen input;
or acquiring the illumination mode selection instruction of the user through voice recognition, gesture recognition, motion recognition, and/or expression recognition.
Further, the performing of parameter adjustment on the intelligent lighting device includes:
performing position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or streaming-light adjustment on the intelligent lighting device.
Further, the method further comprises:
and performing parameter adjustment on the intelligent lighting equipment in the digital twin scene and mapping the parameter adjustment to the parameter adjustment of the intelligent lighting equipment in the real scene.
Further, the method further comprises:
identifying a target object in an environment image acquired by the robot;
and carrying out lighting and light distribution adjustment on the target object according to the selected lighting mode.
In a second aspect, an embodiment of the present disclosure provides an apparatus for adjusting an intelligent lighting device, including:
the classification module is used for classifying the illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier;
the scene construction module is used for acquiring an environment image through a robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
the illumination mode acquisition module is used for inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination modes corresponding to the environment image;
the instruction module is used for receiving a selection instruction of the illumination mode of the environment image;
the parameter acquisition module is used for acquiring the parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and the adjusting module is used for adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter setting.
In a third aspect, embodiments of the present disclosure provide a robot, including:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer readable instructions to cause the robot to implement the method according to any of the first aspects above.
In a fourth aspect, an embodiment of the present disclosure provides an electronic device, including:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the electronic device to implement the method of any of the first aspects.
In a fifth aspect, the disclosed embodiments provide a non-transitory computer-readable storage medium storing computer-readable instructions that, when executed by a computer, cause the computer to implement the method of any one of the above first aspects.
The embodiments of the disclosure provide a method for adjusting an intelligent lighting device, a robot, and an electronic device, wherein the method comprises the following steps: classifying the illumination modes generated by the intelligent lighting device through a convolutional neural network classifier; acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting device; inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image; receiving an instruction selecting an illumination mode for the environment image; acquiring the parameter settings of the intelligent lighting device corresponding to the illumination mode according to the selection instruction; and adjusting the parameters of the intelligent lighting device in the digital twin scene according to the parameter settings. With this method, lighting devices can be adjusted frequently without relying on manual labor, and a suitable illumination mode can be selected at higher efficiency and lower cost to achieve the expected lighting effect, realizing film-grade lighting and ultimately obtaining a suitable picture. Meanwhile, when the robot performs target recognition, lighting the object properly lets the robot achieve a more accurate recognition rate and a better shooting effect.
An object of the embodiments of the present invention is to provide a method for adjusting an intelligent lighting device, a robot, and an electronic device that intelligently select a suitable illumination mode for the user, so that the lighting of the environment and of the target object is better and the lighting effect of the intelligent lighting device is improved, making it suitable for shooting or recognition.
The foregoing is a summary of the present disclosure. To make its technical means clearer, note that the present disclosure may be embodied in other specific forms without departing from its spirit or essential attributes.
Drawings
Fig. 1 is a schematic flow chart of a method for adjusting light of an intelligent lighting device according to an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a convolutional neural network classifier model provided in an embodiment of the present disclosure;
fig. 3 is a schematic view of an apparatus for adjusting light of an intelligent lighting device according to another embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
In order to more clearly describe the technical content of the present disclosure, the following further description is given in conjunction with specific embodiments.
The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an", and "the" in this disclosure are illustrative rather than limiting; those skilled in the art will understand them as "one or more" unless the context clearly indicates otherwise.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. The disclosed embodiments are described in detail below with reference to the accompanying drawings.
Based on the technical scheme of the embodiment of the disclosure, the following technical problems are solved:
How to adjust lighting devices frequently without relying on a large amount of manpower, and how to select a suitable illumination model at higher efficiency and lower cost to achieve the expected lighting effect, so that lighting with different counts, positions, and brightness levels can match different face shapes, timbres, voices, body movements, and depth-of-field layers of people and objects, realizing film-grade lighting and ultimately obtaining a suitable picture.
Meanwhile, how to light the target object when the robot performs recognition, so that the robot achieves a more accurate recognition rate?
Fig. 1 is a schematic flowchart of a method for adjusting light of an intelligent device according to an embodiment of the present disclosure, where the method provided in this embodiment may be executed by an electronic device or a robot and a control apparatus thereof, and the apparatus may be implemented as software, or implemented as a combination of software and hardware, and the apparatus may be integrated in a certain device in a control system, such as a terminal device. As shown in fig. 1, the method comprises the steps of:
step S101: and classifying the illumination mode generated by the intelligent lighting equipment through a convolutional neural network classifier.
In step S101, in the present disclosure, the smart devices in a home, office, or public place are networked and connected to a server through a network cable, Wi-Fi, or Bluetooth. The server can acquire various data of each smart device through the network or a smart connection, including the device's state and position, its service mode, and its adjustable parameters. The server can also acquire the GPS positioning of the robot and the positions of the smart devices through the network. The robot is connected through the network with each intelligent lighting device in the environment. Intelligent lighting devices include, but are not limited to: smart TVs, smart lamps, smart LED lamps, smart ceiling lamps, smart night lights, smart bedside lamps, smart desk lamps, smart pendant lamps, mobile phones, smart speakers, and so on; these devices are merely examples. The robot may be, for example, a wheeled humanoid robot, a floor-sweeping robot, a smart speaker, a vending machine, or any of various intelligent interaction devices, as well as an unmanned aerial vehicle, a smart car, a balance scooter, and the like. Optionally, the intelligent lighting device is equipped with sensors and communication technologies such as Wi-Fi, Bluetooth mesh, UWB, ultrasonic, or infrared, and can communicate with the robot and other intelligent lighting devices. Intelligent lighting devices further include intelligent shading and/or reflecting devices, including but not limited to smart blackout cloths, smart light screens, and the like. Form factors of intelligent lighting devices include, but are not limited to: fixed position and non-rotatable, fixed position but rotatable, movable position, height adjustable while movable, orientation rotatable left and right at the same time, and so on. Parameter settings of an intelligent lighting device include, but are not limited to: device position, device angle, device height, light color, brightness, color temperature, streaming light, and so on.
Specifically, classifying the illumination modes generated by the intelligent lighting device through the convolutional neural network classifier includes: comparing the environment image with the standard images corresponding to the illumination modes labeled in the convolutional neural network classifier; calculating the parameter similarity between the environment image and each standard image from image parameters; and marking an environment image whose parameter similarity is greater than a first threshold with the illumination mode of the corresponding standard image, and recording the image ID of the marked environment image. The image ID of the environment image corresponds to different illumination modes according to the different lighting parameters in effect when the robot shoots the environment image.
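A minimal sketch of this labeling step, assuming a brightness-histogram comparison as the "image parameter" similarity (the disclosure does not fix the metric) and an invented threshold value:

```python
import numpy as np

FIRST_THRESHOLD = 0.8  # hypothetical value for the "first threshold"

def parameter_similarity(env_img: np.ndarray, std_img: np.ndarray) -> float:
    """Similarity in [0, 1] from normalized brightness histograms (stand-in metric)."""
    h1, _ = np.histogram(env_img, bins=32, range=(0, 255))
    h2, _ = np.histogram(std_img, bins=32, range=(0, 255))
    h1 = h1 / max(h1.sum(), 1)  # normalize to probability distributions
    h2 = h2 / max(h2.sum(), 1)
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()  # 1.0 means identical histograms

def label_environment_image(env_img, standards, labels):
    """Mark the image with every illumination mode whose standard image it matches,
    recording the image ID of each marked environment image."""
    for image_id, (std_img, mode) in standards.items():
        if parameter_similarity(env_img, std_img) > FIRST_THRESHOLD:
            labels.setdefault(image_id, []).append(mode)
    return labels
```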
Referring to fig. 2, a schematic diagram of the convolutional neural network classifier model provided by an embodiment of the present disclosure: in the convolutional neural network classifier (model), each environment image corresponds to an image ID, such as image ID1, image ID2, …, image IDn. Each image ID has multiple illumination modes, such as illumination mode 1, illumination mode 2, …, illumination mode N, and different image IDs may share the same illumination mode; for example, image ID1 and image ID3 may both have an illumination mode with the "Rembrandt light" effect. Each illumination mode corresponds to the device parameter settings of each intelligent lighting device; in the figure, illumination mode 1 corresponds to device parameter setting 1, illumination mode 2 to device parameter setting 2, …, and illumination mode N to device parameter setting N. In the classifier, the picture corresponding to an illumination mode's image ID is a reference picture: the environment image is compared with the reference pictures, a reference image whose similarity exceeds a certain threshold is selected, the illumination mode is chosen according to that reference image, and the device parameter settings corresponding to that illumination mode serve as the final adjustment target of the intelligent lighting devices. When training the model, each intelligent lighting device can be lit in turn for a photograph, and different composite images are then synthesized by permutation and combination, with the device parameter settings of each composite recorded. A composite picture can be associated with the image ID of the reference picture that achieves the same mode effect.
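The lookup tables implied by fig. 2 can be sketched as plain mappings. All image IDs, mode names, and parameter values below are invented for illustration:

```python
MODE_TABLE = {            # image ID -> illumination modes offered for that image
    "image_ID1": ["Rembrandt light", "butterfly light"],
    "image_ID3": ["Rembrandt light", "split light"],
}
DEVICE_SETTINGS = {       # illumination mode -> per-device parameter settings
    "Rembrandt light": {"key_light": {"brightness_lm": 2000, "angle_deg": 45}},
    "butterfly light": {"key_light": {"brightness_lm": 1500, "angle_deg": 0}},
    "split light":     {"key_light": {"brightness_lm": 1800, "angle_deg": 90}},
}

def settings_for(image_id: str, chosen_mode: str) -> dict:
    """Return the final adjustment target for the devices once a mode is selected."""
    if chosen_mode not in MODE_TABLE.get(image_id, []):
        raise ValueError(f"{chosen_mode!r} is not offered for {image_id}")
    return DEVICE_SETTINGS[chosen_mode]

print(settings_for("image_ID1", "Rembrandt light"))
```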
In this embodiment, the convolutional neural network classifier classifies through a deep learning algorithm model. The model covers: light source color and brightness, light source position, the lighting positions and light source brightness/colors that different people and objects call for in different scenes, and the corresponding description keywords and usage scenes. For example: for target character A in a writing scene, one main light source (red, 2000 lumens) on the lit side plus one auxiliary light source (blue, 500 lumens) on the lit side with loop light distribution yields a dramatic effect; for target character B in a bust scene, one main light source and one light-shielding board arranged as Rembrandt lighting yields a high-contrast effect.
In an embodiment of the present disclosure, the illumination modes include: a flat/area light mode, a Paramount light (butterfly light) mode, a loop (ring) light mode, a Rembrandt light mode, a shadow-connection light mode, a split light (side light) mode, a broad light mode, a short (slimming) light mode, and so on.
Specifically:
a) Flat/area light mode: the light source is placed beside the camera, facing the subject. This mode looks flat and does not give the subject depth, but it produces a clean, simple picture.
b) Paramount light (butterfly light) mode: the light source is placed above the camera, looking down on the subject from a high position. Struck from above, the light forms a shadow under the nose shaped like a butterfly. This pattern flatters the subject in the lens, casts shadows under the cheeks and chin, makes the cheekbones more prominent, the face thinner, and the chin sharper, and enhances the subject's charm.
c) Loop (ring) light mode: based on Paramount light, the light source sits slightly above eye level at 30-40 degrees to the camera (depending on the individual face), still casting light down onto the subject's face. It leaves a shadow slightly lower on the neck; the nose shadow does not connect with the cheek shadow but points slightly downward. The light source must not be too high, or the catchlights in the eyes are lost; this pattern can create more dramatic effects.
d) Rembrandt light mode: the light source is placed high, at 45-60 degrees to the subject, forming a small triangle of light on one side of the face. This gives the picture a high-contrast effect and can express, through this angle, that the subject is going through the darkest period in life. If Rembrandt light is too dark, a reflector can be added to weaken the shadow. Unlike loop lighting, the nose shadow connects with the cheek shadow; more importantly, the shadowed eye still keeps a bright catchlight, and the picture stays dramatic.
e) Shadow-connection mode: when shooting, the subject turns slightly away from the light source, which also sits above head height, so that the nose shadow connects with the cheek shadow. Not everyone suits this pattern: subjects with prominent cheekbones are ideal, while those with low nose bridges are difficult to light this way.
f) Split light (one-sided light and shadow) mode: the light source is placed at 90 degrees to the left or right of the subject and can be moved slightly forward or backward to suit different face shapes. The light distribution must follow the subject's face: when the head turns, the light should follow. This mode divides the face into a bright half and a dark half, producing a strong dramatic feeling. It suits characters with strong personalities, such as artists and musicians, preserves a masculine flavor, and can also suggest a secret the subject hides or an unknown dark side.
g) Broad light mode: this is not a specific lighting setup but a style; split, loop, or Rembrandt lighting may all be used. The method is simple: turn the lit side of the face toward the lens so it looks wider, making the whole face look larger and broader. It suits people with thin faces.
h) Short (slimming) light mode: the darker side faces the lens, the opposite of broad light, so the face looks sharper and gains more depth and atmosphere.
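For reference, the eight modes above can be collected into a small enumeration; the identifier names are the translations used here, not terms fixed by the disclosure:

```python
from enum import Enum

class IlluminationMode(Enum):
    """The eight portrait-lighting patterns listed above."""
    FLAT = "flat/area light"
    BUTTERFLY = "Paramount (butterfly) light"
    LOOP = "loop (ring) light"
    REMBRANDT = "Rembrandt light"
    SHADOW_CONNECTED = "shadow-connection light"
    SPLIT = "split (side) light"
    BROAD = "broad light"
    SHORT = "short (slimming) light"
```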
Step S102: an environment image is obtained through a robot, and a digital twin scene is constructed according to the environment image and the intelligent lighting equipment.
In step S102, in the present disclosure, the robot is equipped with vision sensors such as an image camera, a depth camera, a lidar, and/or an ultrasonic sensor. The image camera takes pictures or videos, capturing in real time the environment image or target image the robot wants to collect. The robot is connected with each intelligent lighting device in the environment through the network. The robot may be, for example, a wheeled humanoid robot, a floor-sweeping robot, a smart speaker, a vending machine, or any of various intelligent interaction devices, as well as an unmanned aerial vehicle, a smart car, a balance scooter, and the like. Through the robot's vision sensors, the environment image of the robot's surroundings can be collected, and the target object in the image can be identified at the same time.
Specifically, acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting device includes: shooting an environment image through a vision sensor of the robot; identifying intelligent lighting equipment in the environment image; establishing a mapping relation between the intelligent lighting equipment in the digital twin scene and the intelligent lighting equipment in a real scene; and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
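One way to sketch the twin-to-real mapping described in this step; the class layout, and the callable controller standing in for the physical lamp's network interface, are assumptions rather than the disclosure's API:

```python
from dataclasses import dataclass, field

@dataclass
class TwinLight:
    """A smart lighting device mirrored into the digital twin scene."""
    device_id: str
    position: tuple                            # pose recovered from the environment image
    params: dict = field(default_factory=dict)

class DigitalTwinScene:
    """Keeps the twin/real mapping for each lighting device identified in the image."""
    def __init__(self):
        self.lights = {}   # device_id -> TwinLight (virtual side)
        self.real = {}     # device_id -> controller for the physical lamp

    def register(self, device_id, position, controller):
        """Called for each intelligent lighting device identified in the environment image."""
        self.lights[device_id] = TwinLight(device_id, position)
        self.real[device_id] = controller

    def apply(self, device_id, params):
        """Adjust the twin, then mirror the same parameters to the real device."""
        self.lights[device_id].params.update(params)
        self.real[device_id](params)  # controller is any callable, e.g. a network send

scene = DigitalTwinScene()
scene.register("lamp1", (1.0, 2.0, 2.5), print)       # print stands in for a real controller
scene.apply("lamp1", {"brightness_lm": 1200})
```

Applying parameters in the twin and mirroring them to the physical lamp is the behavior that step S106 below maps back to the real scene.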
Step S103: and inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image.
In step S103, in the embodiment of the present disclosure, a clip of film or camera footage is input into the deep learning algorithm model for detection, and an illumination mode is output. The mode includes: the scenes, lighting positions, and light source brightness/colors corresponding to different people, together with the corresponding mode description keywords, such as an "Avatar mode".
Specifically, the inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image includes: inputting an environment image acquired by the robot into the convolutional neural network classifier; identifying the environment image, and acquiring an image ID corresponding to the environment image; and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
Step S104: and receiving a selection instruction of the illumination mode of the environment image.
In step S104, in the embodiment of the present disclosure, the robot is configured with a smart camera, a lidar, and/or a depth camera, and acquires the area and 3D shape of each target object or person in the scene through the lidar and the depth camera. The illumination mode selection instruction of the user can be obtained by input or by recognition, specifically including: acquiring the illumination mode selection instruction of the user through key input and/or touch-screen input; or acquiring the illumination mode selection instruction of the user through voice recognition, gesture recognition, motion recognition, and/or expression recognition.
Specifically, the method comprises the following steps:
1. Automatic control mode one: adjusting lighting by image recognition of face shape or mood:
a) the camera is equipped with a depth camera and a lidar, capturing facial expressions, limb movements, and gesture changes in real time;
b) these observations are input into a deep learning model;
c) the lighting is adjusted according to the deep learning model (the adjustment range is configurable).
2. Automatic control mode two: by timbre, or by the emotion in speech:
a) at least one microphone acquires the human voice in real time;
b) the audio is fed into a deep learning model, which includes the emotion in the corresponding utterance, to adjust the display of different lights.
3. The first manual control mode: digital twinning
a) Acquiring the position of surrounding intelligent equipment through a camera and/or a depth camera in front of a target person or object, and mapping the position to a digital twin scene;
b) or the camera and the intelligent device sense the position of the other party through UWB/ultrasonic waves and map the position to a digital twin scene;
c) In the digital twin three-dimensional space, the positions of the robot, smart devices, and people can be seen. The smart device is adjusted by clicking and dragging on the screen, and the positions of its light source and light-shielding board can be adjusted to control the light; alternatively, clicking a light area around the shot target on the screen adjusts the brightness and positions of multiple lights in linkage.
4. And a second manual control mode: internet of things
a) Through screen media such as pronunciation/screen, draw intelligent (window) curtain up, the light screen blocks the natural light window, utilizes spotlight beam light.
5. Manual control mode three: voice input
a) In the deep learning model, light effects correspond to the expressions of various utterances.
b) For example, the voice input may request the "Rembrandt light" effect, a cool effect, or an "Apple launch event" effect.
6. Key input control:
An illumination mode key is provided on the robot or terminal device, and the user can make a selection through the key to generate a selection instruction.
7. Touch screen input control:
by setting an illumination mode touch screen key or area on the robot or the terminal device, a user can select a touch screen through the touch screen key or area to generate a selection instruction.
Step S105: and acquiring the parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction.
In step S105, in the embodiment of the present disclosure, each lighting mode corresponds to a parameter setting of at least one smart lighting device.
Specifically, obtaining the parameter settings of the intelligent lighting device corresponding to the illumination mode according to the selection instruction includes: acquiring the illumination mode selection instruction of the user through input or recognition; acquiring the parameter settings corresponding to the illumination mode according to the mapping relation between the illumination mode and the lighting parameters of the intelligent lighting device; and acquiring the adjustable range of the parameter settings.
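A sketch of the last sub-step, clamping requested settings into the acquired adjustable range; the parameter names and ranges below are example values only:

```python
ADJUSTABLE_RANGES = {        # parameter -> (min, max); invented example ranges
    "brightness_lm": (0, 3000),
    "angle_deg": (-90, 90),
    "color_temp_K": (2700, 6500),
}

def clamp_settings(settings: dict) -> dict:
    """Clip each requested parameter into the device's adjustable range."""
    clamped = {}
    for name, value in settings.items():
        lo, hi = ADJUSTABLE_RANGES.get(name, (value, value))  # unknown: leave as-is
        clamped[name] = min(max(value, lo), hi)
    return clamped

print(clamp_settings({"brightness_lm": 5000, "angle_deg": 30}))
# -> {'brightness_lm': 3000, 'angle_deg': 30}
```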
In the embodiment of the present disclosure, the camera used for shooting is adjustable. Its parameters include but are not limited to: ISO, shutter, EV, aperture, focal length, and luminous flux brightness. A light source device is mounted on the camera, and the camera's position and illumination lumens are adjustable.
In the embodiments of the present disclosure, the adjustable parameters of the intelligent light source include, but are not limited to: on/off state, projection range, angle, brightness, and color of the light source. In the digital twin world, the intelligent light source is turned on to obtain its lumens and aimed at the object to obtain the illuminated area, giving the light-receiving degree of the object's surface, expressed as the luminous flux received per unit area: E = F / S, where E is the illuminance (lux), F is the luminous flux (lumens), and S is the area (square meters).
Optionally, the material, light reflectance, and physical properties of the object being adjusted may be recorded and selected.
Starting from the environment image: in the real world, a smart light meter obtains an illuminance value at the subject's position facing the light source. The brightness of the other lamps is adjusted according to the light-ratio value; the corresponding parameters are also adjustable in the virtual scene, which can simulate the light-receiving areas of the light-and-shade gradient that the light source projects onto the object.
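The illuminance relation E = F/S can be computed directly; the helper below is a plain restatement of the formula above:

```python
def illuminance_lux(luminous_flux_lm: float, area_m2: float) -> float:
    """E = F / S: illuminance in lux from luminous flux (lm) over the lit area (m^2)."""
    return luminous_flux_lm / area_m2

# A 2000 lm source whose beam covers 4 m^2 of surface gives 500 lux on that surface.
print(illuminance_lux(2000, 4.0))  # 500.0
```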
Step S106: and adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter setting.
In step S106, in the embodiment of the present disclosure, the parameter adjustment of the smart lighting device includes, but is not limited to: position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or streaming-light adjustment of the intelligent lighting device.
The light ratio is adjusted and controlled. The light ratio determines the brightness and contrast of the picture, forming different tonal structures and different modeling effects and artistic atmospheres. Light-ratio measurements include, but are not limited to:
a) the ratio of luminance values between different light sources in the same scene, or the ratio of the luminance of the lit part of an object's surface to that of its shadowed, projected part at the same reflectivity;
b) the ratio of brightness between adjacent surfaces of different reflectivity in the scene under the same light source, such as the person and the background, or the face and the clothing;
c) the ratio of illuminance or luminance values between the brightest and darkest locations in the scene.
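A one-line computation of measurement a), the ratio between two meter readings; the lux values are invented examples:

```python
def light_ratio(bright_lux: float, dark_lux: float) -> float:
    """Ratio of the brighter reading to the darker one, per definition a) above."""
    hi, lo = max(bright_lux, dark_lux), min(bright_lux, dark_lux)
    return hi / lo

# A lit side metering 800 lux against a 200 lux shadow side gives a 4:1 light ratio.
print(light_ratio(800, 200))  # 4.0
```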
Specifically, performing parameter adjustment on the intelligent lighting device includes: position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or streaming-light adjustment of the intelligent lighting device.
In addition, before the step of receiving an instruction to select an illumination mode of the environment image, the method further includes: establishing a corresponding relation between the illumination mode and the lighting parameters of the intelligent lighting equipment; and generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
In addition, the method for adjusting the intelligent lighting equipment further comprises the following steps: and performing parameter adjustment on the intelligent lighting equipment in the digital twin scene and mapping the parameter adjustment to the parameter adjustment of the intelligent lighting equipment in the real scene.
In addition, the method for adjusting the intelligent lighting equipment further comprises the following steps: identifying a target object in an environment image acquired by the robot; and carrying out lighting and light distribution adjustment on the target object according to the selected lighting mode.
Fig. 3 is a schematic diagram of an apparatus for adjusting an intelligent lighting device according to another embodiment of the present disclosure. The apparatus comprises: a classification module 301, a scene construction module 302, an illumination mode acquisition module 303, an instruction module 304, a parameter acquisition module 305, and an adjusting module 306. Wherein:
the classification module 301 is configured to classify the illumination mode generated by the intelligent lighting device through a convolutional neural network classifier.
Specifically, the classification module is configured to: compare the environment image with the standard images corresponding to the illumination modes labeled in the convolutional neural network classifier; calculate the parameter similarity between the environment image and each standard image from image parameters; and mark an environment image whose parameter similarity is greater than a first threshold with the illumination mode of the corresponding standard image, recording the image ID of the marked environment image. The image ID of the environment image corresponds to different illumination modes according to the different lighting parameters in effect when the robot shoots.
The scene construction module 302 is configured to obtain an environment image through a robot, and construct a digital twin scene according to the environment image and the intelligent lighting device.
In the embodiment of the disclosure, the robot is equipped with vision sensors such as an image camera, a depth camera, a lidar, and/or an ultrasonic sensor; the image camera takes pictures or videos, capturing in real time the environment image or target image the robot wants to collect. The robot is connected with each intelligent lighting device in the environment through the network. The robot may be, for example, a wheeled humanoid robot, a floor-sweeping robot, a smart speaker, a vending machine, or any of various intelligent interaction devices, as well as an unmanned aerial vehicle, a smart car, a balance scooter, and the like. Through the robot's vision sensors, the environment image of the robot's surroundings can be collected, and the target object in the image can be identified at the same time.
The scene construction module is specifically configured to: shoot an environment image through a vision sensor of the robot; identify the intelligent lighting devices in the environment image; establish a mapping relation between the intelligent lighting devices in the digital twin scene and those in the real scene; and construct the digital twin scene according to the environment image and the mapping relation of the intelligent lighting devices.
The illumination mode obtaining module 303 is configured to input the environment image into the convolutional neural network classifier, and obtain a plurality of illumination modes corresponding to the environment image.
In the embodiment of the disclosure, a clip of film or camera footage is input into the deep learning algorithm model for detection, and an illumination mode is output. The mode includes: the scenes, lighting positions, and light source brightness/colors corresponding to different people, together with the corresponding mode description keywords, such as an "Avatar mode".
Specifically, the illumination mode obtaining module is specifically configured to: inputting an environment image acquired by the robot into the convolutional neural network classifier; identifying the environment image, and acquiring an image ID corresponding to the environment image; and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
The instruction module 304 is configured to receive a selection instruction of an illumination mode of the environment image.
In the instruction module, the selection instruction is obtained through input or identification, and specifically includes: acquiring an illumination mode selection instruction of a user through key input and/or touch screen input; or acquiring the illumination mode selection instruction of the user through voice recognition, gesture recognition, motion recognition and/or expression recognition.
The parameter obtaining module 305 is configured to obtain the parameter setting of the intelligent lighting device corresponding to the illumination mode according to the selection instruction.
Specifically, the parameter obtaining module is specifically configured to: acquiring an illumination mode selection instruction of a user through input or identification; acquiring parameter setting corresponding to the illumination mode according to the mapping relation between the lighting parameters of the intelligent lighting equipment; and acquiring the adjustable range of the parameter setting.
The adjusting module 306 is configured to perform parameter adjustment on the intelligent lighting device in the digital twin scene according to the parameter setting.
Specifically, the adjusting module is configured to: perform position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment, and/or streaming-light adjustment on the intelligent lighting device.
Furthermore, the apparatus further comprises:
and the mapping module is used for mapping the parameter adjustments of the intelligent lighting device in the digital twin scene to parameter adjustments of the intelligent lighting device in the real scene.
Furthermore, the apparatus further comprises:
the relation establishing module is used for establishing a corresponding relation between the illumination mode and the lighting parameters of the intelligent lighting equipment;
and the instruction generating module is used for generating the control instruction of the intelligent lighting equipment according to the corresponding relation.
The device further comprises:
the identification module is used for identifying a target object in an environment image acquired by the robot;
and the target light adjusting module is used for performing lighting and light-distribution adjustment on the target object according to the selected illumination mode.
The apparatus shown in fig. 3 can perform the method of the embodiment shown in fig. 1, and reference may be made to the related description of the embodiment shown in fig. 1 for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the embodiment shown in fig. 1, and are not described herein again.
Referring now to FIG. 4, shown is a schematic diagram of an electronic device 400 suitable for use in implementing another embodiment of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 4 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 4, electronic device 400 may include a processing device (e.g., central processing unit, graphics processor, etc.) 401 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)402 or a program loaded from a storage device 408 into a Random Access Memory (RAM) 403. In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are also stored. The processing device 401, the ROM 402, and the RAM 403 are connected to each other via a communication line 404. An input/output (I/O) interface 405 is also connected to the communication line 404.
Generally, the following devices may be connected to the I/O interface 405: input devices 406 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 407 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 408 including, for example, tape, hard disk, etc.; and a communication device 409. The communication means 409 may allow the electronic device 400 to communicate wirelessly or by wire with other devices to exchange data. While fig. 4 illustrates an electronic device 400 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 409, or from the storage device 408, or from the ROM 402. The computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure when executed by the processing device 401.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: the interaction method in the above embodiment is performed.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. In some cases, the name of a unit does not constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any embodiment of the foregoing first aspect.
According to one or more embodiments of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any embodiment of the foregoing first aspect.
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the principles of the technology employed. Those skilled in the art will appreciate that the scope of the disclosure is not limited to technical solutions formed by the particular combination of features described above, but also covers other technical solutions formed by any combination of the above features or their equivalents without departing from the spirit of the disclosure, for example, technical solutions formed by replacing the above features with (but not limited to) features having similar functions disclosed in the present disclosure.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (14)

1. A method of adjusting a smart lighting device, comprising:
classifying the illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier;
acquiring an environment image through a robot, and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
inputting the environment image into the convolutional neural network classifier, and acquiring a plurality of illumination modes corresponding to the environment image;
receiving a selection instruction of an illumination mode of the environment image;
acquiring parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter setting.
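Read as a data flow, claim 1 amounts to a classify, select, look up, apply loop. The Python sketch below illustrates that loop under stated assumptions: a pre-trained classifier object exposing a predict() method, and a per-mode preset table. TwinLight, DigitalTwin, run_adjustment, and mode_presets are illustrative names, not identifiers defined by this disclosure.

```python
# Minimal sketch of the claim-1 loop; all names here are editorial stand-ins.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TwinLight:
    device_id: str
    params: Dict[str, float] = field(default_factory=dict)  # brightness, color_temp_k, ...

@dataclass
class DigitalTwin:
    lights: Dict[str, TwinLight] = field(default_factory=dict)

def run_adjustment(image, classifier, twin: DigitalTwin,
                   mode_presets: Dict[str, Dict[str, Dict[str, float]]]) -> DigitalTwin:
    modes: List[str] = classifier.predict(image)   # candidate illumination modes for the image
    chosen: str = modes[0]                         # stands in for the user's selection instruction
    for dev_id, params in mode_presets[chosen].items():
        light = twin.lights.setdefault(dev_id, TwinLight(dev_id))
        light.params.update(params)                # adjust the parameters in the digital twin scene
    return twin
```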
2. The method of claim 1, wherein the classifying the illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier comprises:
comparing the environment image with a standard image corresponding to the illumination mode of the classification mark in the convolutional neural network classifier;
calculating the parameter similarity of the environment image and the standard image through image parameters;
and marking the environment image with the parameter similarity larger than a first threshold value as an illumination mode corresponding to the standard image, and recording an image ID of the marked environment image.
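Claim 2 does not fix which image parameters are compared or how similarity is computed; one plausible reading, sketched below, uses global image statistics as the parameter vector and cosine similarity against the first threshold. The statistic set and the 0.9 default are editorial assumptions.

```python
# One hedged realization of the claim-2 comparison, not the patent's own method.
import numpy as np

def image_params(img: np.ndarray) -> np.ndarray:
    """Comparison vector of simple global statistics (an assumed choice)."""
    return np.concatenate(([img.mean()],                               # overall brightness
                           img.reshape(-1, img.shape[-1]).mean(axis=0)))  # per-channel means

def parameter_similarity(env: np.ndarray, std: np.ndarray) -> float:
    """Cosine similarity between the two parameter vectors."""
    a, b = image_params(env), image_params(std)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

labels: dict = {}  # image ID -> illumination mode of the matching standard image

def label_if_similar(image_id, env, std, mode, threshold=0.9):
    if parameter_similarity(env, std) > threshold:   # "first threshold"
        labels[image_id] = mode                      # record the marked image's ID
```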
3. The method of claim 2, wherein the image ID of the environment image corresponds to different illumination modes according to the different lighting parameters in effect when the robot captures the image.
4. The method of claim 1, wherein the acquiring an environment image through a robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment comprises:
shooting an environment image through a vision sensor of the robot;
identifying intelligent lighting equipment in the environment image;
establishing a mapping relation between the intelligent lighting equipment in the digital twin scene and the intelligent lighting equipment in a real scene;
and constructing a digital twin scene according to the mapping relation between the environment image and the intelligent lighting equipment.
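A minimal sketch of the claim-4 mapping step, assuming an upstream detector has already returned (device_id, bounding_box) pairs for the lighting equipment found in the robot's image; the detector itself and the twin:: naming scheme are hypothetical.

```python
# Map each identified real light fixture to a twin-scene counterpart.
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # x, y, w, h of a detected light in image coordinates

def build_twin_mapping(detections: List[Tuple[str, Box]]) -> Dict[str, str]:
    """One-to-one mapping from real device IDs to twin-scene counterparts."""
    return {real_id: f"twin::{real_id}" for real_id, _box in detections}

# e.g. build_twin_mapping([("lamp-01", (40, 60, 32, 32)), ("lamp-02", (300, 52, 30, 30))])
# -> {"lamp-01": "twin::lamp-01", "lamp-02": "twin::lamp-02"}
```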
5. The method according to any one of claims 1 to 4, wherein the inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination patterns corresponding to the environment image comprises:
inputting an environment image acquired by the robot into the convolutional neural network classifier;
identifying the environment image, and acquiring an image ID corresponding to the environment image;
and acquiring a plurality of illumination modes corresponding to the environment image according to the image ID.
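Once the classifier has labeled the image, claim 5 reduces to two table lookups. The toy tables below are illustrative data only, not values from the disclosure.

```python
# Image -> ID -> plurality of illumination modes, as two dict lookups.
image_to_id = {"env_001.png": "ID-17"}                          # from recognition
id_to_modes = {"ID-17": ["reading", "cinema", "warm-evening"]}  # from classification

def modes_for(image_name: str) -> list:
    image_id = image_to_id[image_name]     # identify the image, obtain its ID
    return id_to_modes.get(image_id, [])   # ID -> candidate illumination modes

assert modes_for("env_001.png") == ["reading", "cinema", "warm-evening"]
```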
6. The method according to claim 1, wherein prior to the step of receiving an instruction for selection of an illumination mode of the environment image, the method further comprises:
establishing a corresponding relation between the illumination mode and the lighting parameters of the intelligent lighting equipment;
and generating a control instruction of the intelligent lighting equipment according to the corresponding relation.
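The claim-6 correspondence can be held in a plain table from which control instructions are generated; the parameter names and the instruction format below are assumptions for illustration.

```python
# Mode-to-parameter correspondence and a control instruction derived from it.
MODE_TO_PARAMS = {
    "reading": {"brightness": 0.8, "color_temp_k": 4500},
    "cinema":  {"brightness": 0.2, "color_temp_k": 2700},
}

def control_instruction(device_id: str, mode: str) -> dict:
    """Build a control instruction from the established correspondence."""
    return {"device": device_id, "set": MODE_TO_PARAMS[mode]}

# control_instruction("lamp-01", "cinema")
# -> {"device": "lamp-01", "set": {"brightness": 0.2, "color_temp_k": 2700}}
```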
7. The method according to claim 1 or 5, wherein the acquiring of the parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction comprises:
acquiring an illumination mode selection instruction of a user through input or recognition;
acquiring the parameter setting corresponding to the illumination mode according to the mapping relation between the illumination mode and the lighting parameters of the intelligent lighting equipment;
and acquiring the adjustable range of the parameter setting.
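Claim 7 pairs each parameter setting with an adjustable range; clamping a requested value into that range is one plausible use of the range, sketched here with made-up bounds.

```python
# Keep each requested setting inside its adjustable range (illustrative bounds).
ADJUSTABLE_RANGES = {"brightness": (0.0, 1.0), "color_temp_k": (2000, 6500)}

def clamp_to_range(requested: dict) -> dict:
    clamped = {}
    for name, value in requested.items():
        lo, hi = ADJUSTABLE_RANGES[name]    # the adjustable range of this setting
        clamped[name] = min(max(value, lo), hi)
    return clamped

assert clamp_to_range({"brightness": 1.4, "color_temp_k": 3000}) == {
    "brightness": 1.0, "color_temp_k": 3000}
```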
8. The method according to claim 7, wherein the acquiring of the illumination mode selection instruction of the user through input or recognition comprises:
acquiring the illumination mode selection instruction of the user through key input and/or touch screen input; or
acquiring the illumination mode selection instruction of the user through voice recognition, gesture recognition, action recognition and/or expression recognition.
9. The method of claim 1, wherein the adjusting the parameters of the intelligent lighting equipment comprises:
and carrying out position adjustment, angle adjustment, height adjustment, light color adjustment, brightness adjustment, color temperature adjustment, light ratio adjustment and/or flowing-light adjustment on the intelligent lighting equipment.
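The adjustment dimensions enumerated in claim 9 can be gathered into a single record; the field names, units, and defaults below are editorial choices, not specified by the claim.

```python
# One record holding every claim-9 adjustment dimension (assumed units).
from dataclasses import dataclass
from typing import Tuple

@dataclass
class LightAdjustment:
    position_m: Tuple[float, float, float] = (0.0, 0.0, 0.0)  # position/height, meters
    angle_deg: float = 0.0                                    # angle adjustment
    color_rgb: Tuple[int, int, int] = (255, 255, 255)         # light color adjustment
    brightness: float = 1.0                                   # brightness, 0..1
    color_temp_k: int = 4000                                  # color temperature, kelvin
    light_ratio: float = 1.0                                  # key-to-fill light ratio
    flowing_light: bool = False                               # flowing-light effect on/off
```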
10. The method of claim 1, further comprising:
performing parameter adjustment on the intelligent lighting equipment in the digital twin scene, and mapping the adjustment to a corresponding parameter adjustment of the intelligent lighting equipment in the real scene.
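A sketch of the claim-10 mirroring step: a parameter change applied to the twin is forwarded to the mapped real device. Here send_to_device() stands in for whatever transport the real system would use, such as an MQTT or vendor API call; none of it is defined by the patent.

```python
# Apply a change in the twin, then mirror it to the mapped real device.
def send_to_device(real_id: str, params: dict) -> None:
    print(f"-> {real_id}: {params}")  # placeholder for the real transport

def adjust_and_mirror(twin_state: dict, twin_to_real: dict,
                      twin_id: str, params: dict) -> None:
    twin_state.setdefault(twin_id, {}).update(params)  # adjust in the digital twin scene
    send_to_device(twin_to_real[twin_id], params)      # mirror to the real scene

# adjust_and_mirror({}, {"twin::lamp-01": "lamp-01"},
#                   "twin::lamp-01", {"brightness": 0.5})
```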
11. The method of claim 1, further comprising:
identifying a target object in an environment image acquired by the robot;
and carrying out lighting and light distribution adjustment on the target object according to the selected illumination mode.
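Claim 11 leaves the light distribution heuristic open; as a toy example only, the sketch below picks a lamp based on which side of the frame the detected target occupies, with the lamp names and bounding box assumed.

```python
# Toy light-distribution heuristic keyed to a detected target's position.
def light_for_target(bbox, image_width: int) -> str:
    """Pick the lamp on the target's side of the frame (illustrative only)."""
    x, _, w, _ = bbox                      # bbox = (x, y, w, h) in pixels
    center_x = x + w / 2
    return "left-lamp" if center_x < image_width / 2 else "right-lamp"

# light_for_target((900, 200, 120, 260), image_width=1280) -> "right-lamp"
```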
12. An apparatus for adjusting intelligent lighting equipment, comprising:
the classification module is used for classifying the illumination modes generated by the intelligent lighting equipment through a convolutional neural network classifier;
the scene construction module is used for acquiring an environment image through a robot and constructing a digital twin scene according to the environment image and the intelligent lighting equipment;
the illumination mode acquisition module is used for inputting the environment image into the convolutional neural network classifier and acquiring a plurality of illumination modes corresponding to the environment image;
the instruction module is used for receiving a selection instruction of the illumination mode of the environment image;
the parameter acquisition module is used for acquiring the parameter setting of the intelligent lighting equipment corresponding to the illumination mode according to the selection instruction;
and the adjusting module is used for adjusting the parameters of the intelligent lighting equipment in the digital twin scene according to the parameter setting.
13. A robot, comprising:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the robot to implement the method of any of claims 1-11.
14. An electronic device, comprising:
at least one memory for storing computer-readable instructions; and
at least one processor configured to execute the computer-readable instructions to cause the electronic device to implement the method of any one of claims 1-11.
CN202210039500.0A 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment Active CN114364099B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202210039500.0A CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment
PCT/CN2023/072077 WO2023134743A1 (en) 2022-01-13 2023-01-13 Method for adjusting intelligent lamplight device, and robot, electronic device, storage medium and computer program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210039500.0A CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment

Publications (2)

Publication Number Publication Date
CN114364099A true CN114364099A (en) 2022-04-15
CN114364099B CN114364099B (en) 2023-07-18

Family

ID=81109508

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210039500.0A Active CN114364099B (en) 2022-01-13 2022-01-13 Method for adjusting intelligent light equipment, robot and electronic equipment

Country Status (2)

Country Link
CN (1) CN114364099B (en)
WO (1) WO2023134743A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114913310A (en) * 2022-06-10 2022-08-16 广州澄源电子科技有限公司 LED virtual scene light control method
CN116073446A (en) * 2023-03-07 2023-05-05 天津天元海科技开发有限公司 Intelligent power supply method and device based on lighthouse multi-energy environment integrated power supply system
WO2023134743A1 (en) * 2022-01-13 2023-07-20 达闼机器人股份有限公司 Method for adjusting intelligent lamplight device, and robot, electronic device, storage medium and computer program
CN117042253A (en) * 2023-07-11 2023-11-10 昆山恩都照明有限公司 Intelligent LED lamp, control system and method

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117202430B (en) * 2023-09-20 2024-03-19 浙江炯达能源科技有限公司 Energy-saving control method and system for intelligent lamp post
CN116963357B (en) * 2023-09-20 2023-12-01 深圳市靓科光电有限公司 Intelligent configuration control method, system and medium for lamp
CN117615490A (en) * 2023-11-10 2024-02-27 深圳市卡能光电科技有限公司 Control parameter adjusting method and system for master control atmosphere lamp and slave control atmosphere lamp

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205693941U (en) * 2016-06-17 2016-11-16 合肥三川自控工程有限责任公司 Multifunctional fire-fighting emergency lighting and evacuation light fixture and intelligence control system
CN108805919A (en) * 2018-05-23 2018-11-13 Oppo广东移动通信有限公司 Light efficiency processing method, device, terminal and computer readable storage medium
WO2018219294A1 (en) * 2017-06-02 2018-12-06 广东野光源眼科技有限公司 Information terminal
WO2019090503A1 (en) * 2017-11-08 2019-05-16 深圳传音通讯有限公司 Image capturing method and image capturing system for intelligent terminal
CN110248450A * 2019-04-30 广州富港万嘉智能科技有限公司 Method and device for controlling light by combining people
WO2020056768A1 (en) * 2018-09-21 2020-03-26 Nokia Shanghai Bell Co., Ltd. Mirror
CN111182233A (en) * 2020-01-03 2020-05-19 宁波方太厨具有限公司 Control method and system for automatic light supplement of shooting space
CN111586941A (en) * 2020-04-24 2020-08-25 苏州华普物联科技有限公司 Intelligent illumination control method based on neural network algorithm
IT201900011304A1 (en) * 2019-07-10 2021-01-10 Rebernig Supervisioni Srl Adaptive Lighting Control Method and Adaptive Lighting System
WO2021098191A1 (en) * 2019-11-21 2021-05-27 天津九安医疗电子股份有限公司 Method for automatically adjusting illumination level of target scene, and intelligent illumination control system
CN113711578A (en) * 2019-04-22 2021-11-26 恒久礼品股份有限公司 Intelligent toilet mirror loudspeaker system
CN113900384A (en) * 2021-10-13 2022-01-07 达闼科技(北京)有限公司 Method and device for interaction between robot and intelligent equipment and electronic equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10465882B2 (en) * 2011-12-14 2019-11-05 Signify Holding B.V. Methods and apparatus for controlling lighting
US11232502B2 (en) * 2017-12-20 2022-01-25 Signify Holding B.V. Lighting and internet of things design using augmented reality
US20230284361A1 (en) * 2020-06-04 2023-09-07 Signify Holding B.V. A method of configuring a plurality of parameters of a lighting device
CN112492224A (en) * 2020-11-16 2021-03-12 广州博冠智能科技有限公司 Adaptive scene light supplement method and device for video camera
CN113824884B (en) * 2021-10-20 2023-08-08 深圳市睿联技术股份有限公司 Shooting method and device, shooting equipment and computer readable storage medium
CN114364099B (en) * 2022-01-13 2023-07-18 达闼机器人股份有限公司 Method for adjusting intelligent light equipment, robot and electronic equipment

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205693941U (en) * 2016-06-17 2016-11-16 合肥三川自控工程有限责任公司 Multifunctional fire-fighting emergency lighting and evacuation light fixture and intelligence control system
WO2018219294A1 (en) * 2017-06-02 2018-12-06 广东野光源眼科技有限公司 Information terminal
WO2019090503A1 (en) * 2017-11-08 2019-05-16 深圳传音通讯有限公司 Image capturing method and image capturing system for intelligent terminal
CN111316633A (en) * 2017-11-08 2020-06-19 深圳传音通讯有限公司 Image shooting method and image shooting system of intelligent terminal
CN108805919A (en) * 2018-05-23 2018-11-13 Oppo广东移动通信有限公司 Light efficiency processing method, device, terminal and computer readable storage medium
WO2020056768A1 (en) * 2018-09-21 2020-03-26 Nokia Shanghai Bell Co., Ltd. Mirror
CN113711578A (en) * 2019-04-22 2021-11-26 恒久礼品股份有限公司 Intelligent toilet mirror loudspeaker system
CN110248450A * 2019-04-30 广州富港万嘉智能科技有限公司 Method and device for controlling light by combining people
IT201900011304A1 (en) * 2019-07-10 2021-01-10 Rebernig Supervisioni Srl Adaptive Lighting Control Method and Adaptive Lighting System
WO2021098191A1 (en) * 2019-11-21 2021-05-27 天津九安医疗电子股份有限公司 Method for automatically adjusting illumination level of target scene, and intelligent illumination control system
CN111182233A (en) * 2020-01-03 2020-05-19 宁波方太厨具有限公司 Control method and system for automatic light supplement of shooting space
CN111586941A (en) * 2020-04-24 2020-08-25 苏州华普物联科技有限公司 Intelligent illumination control method based on neural network algorithm
CN113900384A (en) * 2021-10-13 2022-01-07 达闼科技(北京)有限公司 Method and device for interaction between robot and intelligent equipment and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ZHANG Tao; HU Mengyang; DU Wenli; WANG Hao: "A Novel Adjustable Adaptive Regional Dimming Method", Laser & Optoelectronics Progress *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023134743A1 (en) * 2022-01-13 2023-07-20 达闼机器人股份有限公司 Method for adjusting intelligent lamplight device, and robot, electronic device, storage medium and computer program
CN114913310A (en) * 2022-06-10 2022-08-16 广州澄源电子科技有限公司 LED virtual scene light control method
CN116073446A (en) * 2023-03-07 2023-05-05 天津天元海科技开发有限公司 Intelligent power supply method and device based on lighthouse multi-energy environment integrated power supply system
CN116073446B (en) * 2023-03-07 2023-06-02 天津天元海科技开发有限公司 Intelligent power supply method and device based on lighthouse multi-energy environment integrated power supply system
CN117042253A (en) * 2023-07-11 2023-11-10 昆山恩都照明有限公司 Intelligent LED lamp, control system and method

Also Published As

Publication number Publication date
WO2023134743A1 (en) 2023-07-20
CN114364099B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN114364099B (en) Method for adjusting intelligent light equipment, robot and electronic equipment
US10952296B2 (en) Lighting system and method
US11381778B2 (en) Hybrid texture map to be used during 3D video conferencing
CN108764091A (en) Biopsy method and device, electronic equipment and storage medium
CN110248450B (en) Method and device for controlling light by combining people
CN109416842A (en) Geometric match in virtual reality and augmented reality
CN108846377A (en) Method and apparatus for shooting image
US20220286657A1 (en) Virtual 3d communications with participant viewpoint adjustment
CN108701355A (en) GPU optimizes and the skin possibility predication based on single Gauss online
CN108648061A (en) image generating method and device
US10261749B1 (en) Audio output for panoramic images
TWI672948B (en) System and method for video production
CN109993835A Stage interaction method, apparatus and system
CN109618088A (en) Intelligent camera system and method with illumination identification and reproduction capability
CN116055800A (en) Method for mobile terminal to obtain customized background real-time dance video
US11425313B1 (en) Increasing dynamic range of a virtual production display
US20220116531A1 (en) Programmable rig control for three-dimensional (3d) reconstruction
CN110136239B (en) Method for enhancing illumination and reflection reality degree of virtual reality scene
CN113411513A (en) Intelligent light adjusting method and device based on display terminal and storage medium
Gaddy Media design and technology for live entertainment: Essential tools for video presentation
CN116471437A (en) Method, device, equipment and storage medium for adjusting playing atmosphere of intelligent glasses
CN115686194A (en) Method, system and device for real-time visualization and interaction of virtual images
KR20230029078A (en) Face image outputting device capable of moving head
WO2023094875A1 (en) Increasing dynamic range of a virtual production display
WO2023094880A1 (en) Increasing dynamic range of a virtual production display

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 201111 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant after: Dayu robot Co.,Ltd.

Address before: 200245 Building 8, No. 207, Zhongqing Road, Minhang District, Shanghai

Applicant before: Dalu Robot Co.,Ltd.

GR01 Patent grant