CN116485886A - Lamp synchronization method, device, equipment and storage medium - Google Patents


Info

Publication number
CN116485886A
Authority
CN
China
Prior art keywords
lamp
virtual
dimensional code
pose
pose information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310002163.2A
Other languages
Chinese (zh)
Inventor
吴卓莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202310002163.2A priority Critical patent/CN116485886A/en
Publication of CN116485886A publication Critical patent/CN116485886A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0025Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement consisting of a wireless interrogation device in combination with a device for optically marking the record carrier
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the application disclose a lamp synchronization method, device, equipment and storage medium, which are used to improve the accuracy of calibrating a virtual lamp and the synchronization between the virtual lamp and a real lamp. The method comprises the following steps: allocating a unique corresponding lamp detection identifier to each lamp in a target scene, where each lamp detection identifier corresponds to a lamp model and lamp information; collecting an identifier detection image corresponding to each lamp detection identifier; acquiring, based on each identifier detection image, first pose information of each lamp detection identifier in the target scene; converting the first pose information into second pose information corresponding to each lamp detection identifier in a three-dimensional virtual space coordinate system; calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection identifier to obtain a calibrated lamp calibration pose; and synchronously adjusting the lamps in the target scene based on the lamp calibration pose.

Description

Lamp synchronization method, device, equipment and storage medium
Technical Field
The embodiment of the application relates to the technical field of lamp control, in particular to a lamp synchronization method, device, equipment and storage medium.
Background
With the development of science and technology, film shoots and stage performances increasingly require the virtual scene and the real scene to share the same lighting atmosphere. In the prior art, real lighting is usually adjusted manually by specialized staff so that the shooting or stage effect can be presented as intended.
However, a lamp differs from other common objects: light is self-luminous and comes in many types, which, by the light effect produced, can be divided into parallel light, spot light, laser, panel light and the like. Light also has multiple attributes, such as brightness, chromaticity and color temperature, and a point light source additionally has a beam angle.
In the prior art, most lighting-matching work only matches the common brightness, chromaticity and color temperature parameters of the light. In practice, however, the position and posture of a lamp strongly affect the shadows cast when its light strikes an object; it can be understood that the more concentrated the light source, the greater the influence of the lamp's pose on the shadow effect presented to the lens. Since lamps are installed manually, manual measurement is inaccurate, inefficient, labor-intensive and error-prone. Moreover, during film shooting or a stage performance, dozens of lamps often work in the same space, the lighting atmosphere is difficult to observe in real time by eye, and the light effects change rapidly, all of which increases the difficulty of synchronizing the dynamic light effects of virtual and real lighting.
Disclosure of Invention
The embodiments of the application provide a lamp synchronization method, device, equipment and storage medium. Before the lamps in the target scene are formally used for lighting, the real pose information of each lamp detection identifier in the target scene, namely the first pose information, is acquired and converted into second pose information corresponding to each identifier in a three-dimensional virtual space coordinate system, which is used to calibrate the virtual lamp, that is, to adjust the virtual lamp to be consistent with the pose of the lamp in the target scene. When the lamps in the target scene are formally used for lighting, the lamps and the virtual lamps can then be adjusted synchronously based on the lamp calibration pose, thereby improving the synchronization between the virtual lamps and the real lamps.
In one aspect, an embodiment of the present application provides a method for synchronizing a lamp, including:
allocating a unique corresponding lamp detection identifier for each lamp in the target scene, wherein each lamp detection identifier corresponds to one lamp model and lamp information;
acquiring an identification detection image corresponding to each lamp detection identification;
acquiring first pose information of each lamp detection mark in a target scene based on each mark detection image;
Converting the first pose information into second pose information corresponding to each lamp detection mark in the three-dimensional virtual space coordinate system;
calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection mark to obtain a calibrated lamp calibration pose;
and based on the lamp calibration pose, synchronously adjusting the lamps in the target scene.
Another aspect of the present application provides a device for synchronizing a lamp, including:
the processing unit is used for distributing a unique corresponding lamp detection identifier to each lamp in the target scene, wherein each lamp detection identifier corresponds to one lamp model and lamp information;
the acquisition unit is used for acquiring an identifier detection image corresponding to each lamp detection identifier;
the acquisition unit is also used for acquiring first pose information of each lamp detection mark in the target scene based on each mark detection image;
the processing unit is also used for converting the first pose information into second pose information corresponding to each lamp detection mark in the three-dimensional virtual space coordinate system;
the processing unit is further used for calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection mark to obtain a calibrated lamp calibration pose;
And the control unit is used for synchronously adjusting the lamps in the target scene based on the lamp calibration pose.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the processing unit may specifically be configured to:
according to the lamp model and the lamp information, initial pose information corresponding to the virtual lamp is obtained;
and based on the second pose information, calibrating the initial pose information corresponding to the virtual lamp to obtain the lamp calibration pose.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the processing unit may specifically be configured to:
comparing the second pose information with initial pose information corresponding to the virtual lamp to obtain a comparison result;
if the comparison result is that the information is consistent, the initial pose information corresponding to the virtual lamp is used as the lamp calibration pose corresponding to the virtual lamp;
and if the comparison result is that the information is inconsistent, replacing the initial pose information corresponding to the virtual lamp with the second pose information to obtain the lamp calibration pose corresponding to the virtual lamp.
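The compare-and-replace logic above can be sketched in a few lines. This is a minimal sketch, assuming poses are (x, y, z, roll, pitch, yaw) tuples and that "consistent" means agreement within small tolerances; the tuple layout, function name and tolerance values are illustrative, not from the patent:

```python
def calibrate_pose(initial_pose, second_pose, pos_tol=0.01, ang_tol=0.5):
    """Return the luminaire calibration pose.

    If the measured second pose agrees with the virtual luminaire's initial
    pose within tolerance, keep the initial pose; otherwise replace it with
    the measured pose.
    """
    consistent = all(
        abs(a - b) <= (pos_tol if i < 3 else ang_tol)
        for i, (a, b) in enumerate(zip(initial_pose, second_pose))
    )
    return initial_pose if consistent else second_pose

# Consistent measurement: the initial pose is kept.
kept = calibrate_pose((0, 0, 0, 0, 0, 0), (0.005, 0, 0, 0.1, 0, 0))
# Inconsistent measurement: the measured pose replaces the initial one.
replaced = calibrate_pose((0, 0, 0, 0, 0, 0), (1.0, 0, 0, 0, 0, 0))
```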
In one possible design, in one implementation of another aspect of the embodiments of the present application, the processing unit may specifically be configured to:
Acquiring the total number of lamps in a target scene, and acquiring a two-dimensional code set corresponding to the total number of lamps from a two-dimensional code dictionary;
based on the two-dimensional code set, distributing a unique corresponding two-dimensional code and a two-dimensional code identifier corresponding to each two-dimensional code to each lamp;
the acquisition unit may specifically be configured to: collecting two-dimensional code detection images corresponding to each two-dimensional code;
the acquisition unit may specifically be configured to: acquiring first pose information of each two-dimensional code in the target scene based on each two-dimensional code detection image;
the processing unit may be specifically configured to: and converting the first pose information into second pose information corresponding to each two-dimensional code in a three-dimensional virtual space coordinate system.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the processing unit may specifically be configured to:
classifying lamps based on lamp models and lamp information to obtain P lamp categories and lamp sets corresponding to each lamp category, wherein P is an integer greater than or equal to 1;
dividing the two-dimensional code set into P two-dimensional code subsets based on P lamp categories and lamp sets corresponding to each lamp category;
And distributing each two-dimensional code in the P two-dimensional code subsets, together with the two-dimensional code identifier corresponding to each two-dimensional code, to the lamps in the P lamp sets one by one.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the obtaining unit may specifically be configured to:
installing the two-dimensional code corresponding to each lamp to the corresponding lamp in the target scene to obtain a two-dimensional code detection point corresponding to each lamp;
and shooting the two-dimensional code of each two-dimensional code detection point in the target scene at the acquisition moment to obtain a two-dimensional code detection image corresponding to each two-dimensional code.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the obtaining unit may specifically be configured to:
acquiring a first coordinate position corresponding to the two-dimensional code in each two-dimensional code detection image;
and calculating a first pose corresponding to each two-dimensional code based on the size of the two-dimensional code installed at each two-dimensional code detection point.
In one possible design, in one implementation of another aspect of the embodiments of the present application, the processing unit may specifically be configured to:
acquiring a shooting origin coordinate position in a target scene and a virtual origin coordinate position in a three-dimensional virtual space coordinate system;
The first coordinate position and the first pose are converted into the second coordinate position and the second pose based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position, and the photographing origin coordinate position.
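The conversion between the shooting-origin frame and the virtual-origin frame is a rigid transform. A minimal sketch, assuming the rotation matrix R and translation t relating the two origins are known (e.g. from calibrating the shooting origin against the virtual origin); the example values are made up:

```python
import math

def to_virtual_space(point, rotation, translation):
    """Apply a rigid transform: p_virtual = R @ p_scene + t.

    Sketch of converting a first coordinate position (target-scene frame)
    into a second coordinate position (three-dimensional virtual space
    coordinate system).
    """
    x = sum(r * p for r, p in zip(rotation[0], point)) + translation[0]
    y = sum(r * p for r, p in zip(rotation[1], point)) + translation[1]
    z = sum(r * p for r, p in zip(rotation[2], point)) + translation[2]
    return (x, y, z)

# Example: virtual origin rotated 90 degrees about z and offset 2 m along x.
c, s = math.cos(math.pi / 2), math.sin(math.pi / 2)
R = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
t = (2.0, 0.0, 0.0)
```

The same matrix, applied to the marker's orientation, converts the first pose into the second pose.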
In one possible design, in one implementation of another aspect of the embodiments of the present application, the control unit may specifically be configured to:
if an adjustment instruction of the virtual lamp is received, adjusting the virtual lamp in the virtual scene based on the adjustment instruction and the lamp calibration pose, and generating a corresponding lamp adjustment signal;
and based on the lamp adjusting signal, controlling the lamps in the target scene corresponding to the virtual lamp to synchronously adjust.
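The adjust-then-mirror flow of the control unit can be sketched as follows; the dict-based lamp state and the signal format (an id/state pair handed to a callback) are assumptions for illustration, not the patent's actual signal encoding:

```python
def on_adjust(virtual_lamp, instruction, calibration_pose, send_signal):
    """Apply an adjustment to the virtual lamp, then mirror it to the real
    lamp by emitting a corresponding lamp adjustment signal."""
    virtual_lamp.update(calibration_pose)   # anchor at the calibrated pose
    virtual_lamp.update(instruction)        # apply the requested change
    send_signal({"lamp_id": virtual_lamp["id"], "state": dict(virtual_lamp)})

# Example wiring: collect emitted signals in a list instead of sending them
# to a real lighting console.
signals = []
lamp = {"id": "S_0", "brightness": 0}
on_adjust(lamp, {"brightness": 255}, {"x": 1.0}, signals.append)
```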
Another aspect of the present application provides a computer device comprising: a memory, a processor, and a bus system;
wherein the memory is used for storing programs;
the processor is used for executing the program in the memory to realize the method of the aspects;
the bus system is used to connect the memory and the processor to communicate the memory and the processor.
Another aspect of the present application provides a computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the methods of the above aspects.
From the above technical solution, the embodiment of the present application has the following beneficial effects:
the method comprises the steps of distributing a unique corresponding lamp detection identifier for each lamp in a target scene, collecting an identifier detection image corresponding to each lamp detection identifier, further, based on each identifier detection image, obtaining first pose information of each lamp detection identifier in the target scene, converting the first pose information into second pose information corresponding to each lamp detection identifier in a three-dimensional virtual space coordinate system, and calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection identifier to obtain a calibrated lamp calibration pose, and synchronously adjusting the lamps in the target scene based on the lamp calibration pose. According to the method, before lighting the lamps in the target scene in formal use, the unique corresponding lamp detection identifiers can be distributed to each lamp in the target scene, then the real pose information, namely the first pose information, of each lamp detection identifier in the target scene is acquired based on the acquired identifier detection images, then the virtual lamps can be calibrated by converting the real pose information into the second pose information corresponding to each lamp detection identifier in the three-dimensional virtual space coordinate system, namely the virtual lamps are adjusted to be consistent with the pose state of the lamps in the target scene, and synchronous adjustment of the lamps in the target scene and the virtual lamps can be realized based on the lamp calibration pose when lighting the lamps in the target scene in formal use, so that the synchronous effect of the virtual lamps and the real lamps is improved.
Drawings
FIG. 1 is a schematic diagram of an architecture of a luminaire control system according to an embodiment of the present application;
FIG. 2 is a flow chart of one embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 3 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 4 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 5 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 6 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 7 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 8 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 9 is a flow chart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 10 is a flowchart of another embodiment of a luminaire synchronization method in an embodiment of the present application;
FIG. 11 is a schematic flow chart of a lamp synchronization method according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another principle flow of a luminaire synchronization method in an embodiment of the present application;
FIG. 13 is a schematic diagram showing the effect of lamp positions of a lamp synchronization method according to an embodiment of the present disclosure;
FIG. 14 is a schematic view of one embodiment of a lamp synchronization method apparatus in an embodiment of the present application;
FIG. 15 is a schematic diagram of one embodiment of a computer device in an embodiment of the present application.
Detailed Description
The embodiments of the application provide a lamp synchronization method, device, equipment and storage medium. Before the lamps in the target scene are formally used for lighting, the real pose information of each lamp detection identifier in the target scene, namely the first pose information, is acquired and converted into second pose information corresponding to each identifier in a three-dimensional virtual space coordinate system, which is used to calibrate the virtual lamp, that is, to adjust the virtual lamp to be consistent with the pose of the lamp in the target scene. When the lamps in the target scene are formally used for lighting, the lamps and the virtual lamps can then be adjusted synchronously based on the lamp calibration pose, thereby improving the synchronization between the virtual lamps and the real lamps.
The terms "first," "second," "third," "fourth" and the like in the description and in the claims and drawings, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be capable of operation in sequences other than those illustrated or described herein, for example. Furthermore, the terms "comprises," "comprising," and "includes" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed or inherent to such process, method, article, or apparatus.
For ease of understanding, some terms or concepts related to embodiments of the present application are explained first.
1. Virtual film studio: a modern digital studio in which a green screen or LED background wall, a virtual camera system, a spatial positioning system and a real-time rendering system work cooperatively, so that post-production effects can be achieved during shooting (post-production "moved up front").
2. Camera data: the picture shot by the main camera is shot at the film and television production site and is mainly presented in a video form.
3. Virtual scene data: digital scenes created in a digital content generation tool according to artists' requirements or from real scenes, including two-dimensional, three-dimensional and panoramic content.
4. Camera synchronization data: the position and pose of the camera in real-world space, typically represented with six degrees of freedom (6DoF), calculated in real time or offline during film and television production so that the camera can be restored in the three-dimensional virtual space.
5. Art-Net, sACN: data distribution protocols that allow DMX512 and RDM lighting data to be transmitted over Ethernet. Both use simple UDP-based packet structures aimed at providing an efficient, low-overhead data stream.
6. Digital Multiplex (DMX): DMX512 is a digital communication network standard commonly used to control stage lighting and effects.
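As a concrete illustration of the protocols just defined, the sketch below builds a minimal ArtDMX packet, the Art-Net message that carries one universe of DMX512 channel data; the universe and channel values are made up, and a real controller would send the bytes over UDP to port 6454:

```python
import struct

def artdmx_packet(universe, dmx_data, sequence=0):
    """Build an ArtDMX packet carrying up to 512 DMX channel values."""
    if len(dmx_data) % 2:                    # DMX payload length must be even
        dmx_data = dmx_data + bytes(1)
    return (
        b"Art-Net\x00"                       # 8-byte protocol ID
        + struct.pack("<H", 0x5000)          # OpCode: ArtDMX, little-endian
        + struct.pack(">H", 14)              # protocol version, big-endian
        + bytes([sequence, 0])               # sequence, physical input port
        + struct.pack("<H", universe)        # SubUni + Net, little-endian
        + struct.pack(">H", len(dmx_data))   # data length, big-endian
        + dmx_data
    )

# Set the first channel of universe 0 (say, a dimmer) to full:
pkt = artdmx_packet(0, bytes([255]))
# socket.sendto(pkt, (node_ip, 6454)) would transmit it over Ethernet.
```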
It can be appreciated that in the specific embodiment of the present application, related data such as the two-dimensional code detection image and the first pose information is related, when the above embodiments of the present application are applied to specific products or technologies, user permission or consent needs to be obtained, and the collection, use and processing of related data need to comply with related laws and regulations and standards of related countries and regions.
It should be understood that the lamp synchronization method provided by the application can be applied to various scenes, including but not limited to artificial intelligence, cloud technology, maps, intelligent traffic and the like, and is used for calibrating the virtual lamp by acquiring true pose information of two-dimensional codes corresponding to each real lamp and converting the true pose information into second pose information corresponding to each two-dimensional code under a three-dimensional virtual space coordinate system, so as to complete synchronous control of the virtual lamp on the real lamp, and be applied to scenes such as a virtual film making scene, a stage performance scene, a movie shooting scene, a television show or advertisement shooting scene, a field release meeting and the like.
To solve the above problems, the present application proposes a lamp synchronization method applied to the lamp control system shown in fig. 1. Referring to fig. 1, which is a schematic diagram of the architecture of the lamp control system in an embodiment of the present application: the server allocates a unique corresponding lamp detection identifier to each lamp in the target scene and then obtains the identifier detection image, collected by the terminal device, corresponding to each identifier. Based on each identifier detection image, the server acquires first pose information of each lamp detection identifier in the target scene and converts it into second pose information corresponding to each identifier in a three-dimensional virtual space coordinate system. The virtual lamp in the virtual scene can then be calibrated based on the second pose information, the lamp model and the lamp information corresponding to each identifier to obtain a calibrated lamp calibration pose, and the lamps in the target scene are synchronously adjusted based on that pose.
According to this method, before the lamps in the target scene are formally used for lighting, each lamp is assigned a unique detection identifier, the real pose information of each identifier, namely the first pose information, is acquired from the collected detection images, and the virtual lamps are calibrated by converting this into second pose information in the three-dimensional virtual space coordinate system, that is, the virtual lamps are adjusted to be consistent with the pose of the lamps in the target scene. When the lamps in the target scene are formally used for lighting, the lamps and the virtual lamps can then be adjusted synchronously based on the lamp calibration pose, improving the synchronization between the virtual lamps and the real lamps.
It should be understood that only one terminal device is shown in fig. 1; in an actual scenario more, and more varied, terminal devices may participate in the data processing, including but not limited to mobile phones, computers, intelligent voice interaction devices, smart home appliances and vehicle terminals; the specific number and variety depend on the actual scenario and are not limited herein. Likewise, one server is shown in fig. 1, but in an actual scenario a plurality of servers may be involved, especially in multi-model training interaction; the number of servers depends on the actual scenario and is not limited by the present application.
It should be noted that in this embodiment, the server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server that provides cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a content delivery network (CDN), and basic cloud computing services such as big data and artificial intelligence platforms. The terminal device and the server may be connected directly or indirectly through wired or wireless communication, and may be connected to form a blockchain network, which is not limited herein.
With reference to the foregoing description, a description will be made of a lamp synchronization method in the present application, referring to fig. 2, and one embodiment of the lamp synchronization method in the embodiment of the present application includes:
in step S101, a unique corresponding lamp detection identifier is allocated to each lamp in the target scene, where each lamp detection identifier corresponds to a lamp model and lamp information;
In the embodiment of the application, when there is a target scene in which lamps need to be used for lighting, a unique corresponding lamp detection identifier is first allocated to each lamp. This makes it possible to better acquire the real pose information of each real lamp in the target scene, so that the virtual lamps in the virtual scene can be calibrated based on that pose information, improving the accuracy of calibrating the virtual lamps and the synchronization between the virtual and real lamps.
The target scene refers to a scene in the real world where various real lamps can be used for lighting, for example, a scene such as shooting of movies (movies, television shows, short videos, etc.), stage lectures, performances, etc.
Each lamp in the target scene has its own lamp model and lamp information, so after a unique corresponding lamp detection identifier (such as a two-dimensional code, a bar code or a special label) is allocated to each lamp, each identifier is known to correspond to one lamp model and one set of lamp information. The lamp information includes the lamp name (e.g., spotlight, soft light or light column), a lamp description, the lamp power and the like, while the lamp model indicates the specific product model of the lamp, e.g., PAR46, or an S-type lamp.
Specifically, as shown in fig. 11, by acquiring the total number of lamps to be installed in the target scene, an identifier set (such as a two-dimensional code set, bar code set or special label set) matching that total can be obtained from a pre-constructed identifier dictionary (such as a two-dimensional code, bar code or special label dictionary), and each lamp in the target scene can then be allocated a unique corresponding lamp detection identifier. For example, if the target scene is a live performance requiring 20 S-type lamps from a given manufacturer, 20 different lamp detection identifiers can be obtained from the identifier dictionary to form the identifier set and encoded, for example as S_0 to S_19, so that the identifiers with IDs 0-19 are allocated to the 20 lamps.
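The allocation step above can be sketched as a simple one-to-one mapping; the S_0 to S_19 naming follows the example, while the luminaire metadata fields are illustrative:

```python
def allocate_markers(luminaires, prefix="S"):
    """Assign one unique marker ID per luminaire in the target scene,
    pairing each ID with that luminaire's model and information."""
    return {
        f"{prefix}_{i}": {"model": lum["model"], "info": lum["info"]}
        for i, lum in enumerate(luminaires)
    }

# 20 S-type lamps, as in the example above; model/info fields are made up.
lamps = [{"model": "S-type", "info": {"name": "spotlight", "power_w": 300}}
         for _ in range(20)]
markers = allocate_markers(lamps)   # keys S_0 .. S_19, one per lamp
```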
In step S102, an identification detection image corresponding to each lamp detection identification is collected;
In the embodiment of the application, after each lamp in the target scene has been allocated its unique corresponding lamp detection identifier, the allocated identifier can be installed at the corresponding lamp in the target scene. The identifiers are then photographed to collect the identifier detection image corresponding to each lamp detection identifier, so that the real pose information of each identifier in the target scene can be accurately acquired from each image.
Specifically, as shown in fig. 11, before the lamps in the target scene are lit in formal use, the allocated lamp detection identifiers (such as two-dimensional codes, bar codes, special labels, or the like) may be installed at the corresponding lamps in the target scene. A shooting device (such as a camera) is then used to shoot the lamp detection identifiers installed at the lamps, for example, the lamp detection identifier corresponding to mark point ID001 of the class A lamp 001 illustrated in fig. 11, so as to acquire the identifier detection image corresponding to each lamp detection identifier. The acquired identifier detection images are then transferred as digital signals to software (such as image processing software) on the server through an acquisition card (such as an HDMI acquisition card), so that the software can export a position file for each lamp detection identifier and thereby accurately acquire the real pose information of each lamp detection identifier in the target scene.
In step S103, based on each identification detection image, acquiring first pose information of each luminaire detection identification in a target scene;
in the embodiment of the application, after the identifier detection image corresponding to each lamp detection identifier is obtained, the first pose information of each lamp detection identifier in the target scene can be obtained based on each identifier detection image. The first pose information can then be converted into second pose information corresponding to each lamp detection identifier in the three-dimensional virtual space coordinate system, so that the virtual lamp can be calibrated based on the real pose information, and the accuracy of calibrating the virtual lamp can be improved to a certain extent.
Specifically, after the identifier detection image corresponding to each lamp detection identifier is obtained, the key point coordinates of the lamp detection identifier, for example, the three-dimensional coordinates of the upper left corner of the lamp detection identifier (such as a two-dimensional code, a bar code, a special label, or the like), can be obtained through a deep-learning-based model, a physical segmentation algorithm, an edge detection algorithm, or other algorithms, which are not specifically limited herein, so as to obtain the real coordinate position of the lamp detection identifier in the target scene (such as the position coordinates position(x, y, z) of the lamp in the world coordinate system), namely, the first coordinate position corresponding to the lamp detection identifier. Then, based on the size of the lamp detection identifier installed at each real lamp, the real pose corresponding to each lamp detection identifier (such as the rotation amount orientation(x, y, z, w), which represents the rotation pose of the lamp in the world coordinate system) can be calculated according to a visual algorithm and a deep learning model, or other pose algorithms, so that the first pose information corresponding to each lamp detection identifier in the target scene is obtained from the first coordinate position and the first pose.
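The paragraph above leaves the detection and pose algorithms open. As a minimal sketch, assuming the four marker corners have already been recovered in world coordinates by whatever detection step is used, the first coordinate position and the orientation quaternion could be derived as follows; the function name, corner ordering, and the assumption that the rotation is far from 180° are all illustrative:

```python
import numpy as np

def marker_pose(corners):
    """corners: 4x3 array of marker corners in world coordinates,
    ordered upper-left, upper-right, lower-right, lower-left.
    Returns position (x, y, z) and orientation quaternion (x, y, z, w)."""
    corners = np.asarray(corners, dtype=float)
    position = corners.mean(axis=0)               # marker centre as position
    x_axis = corners[1] - corners[0]              # upper-left -> upper-right
    x_axis = x_axis / np.linalg.norm(x_axis)
    up = corners[0] - corners[3]                  # lower-left -> upper-left
    z_axis = np.cross(x_axis, up)                 # marker normal
    z_axis = z_axis / np.linalg.norm(z_axis)
    y_axis = np.cross(z_axis, x_axis)
    r = np.column_stack((x_axis, y_axis, z_axis)) # rotation matrix
    # Rotation matrix -> quaternion (valid when the rotation is not near 180 deg)
    w = 0.5 * np.sqrt(max(0.0, 1.0 + r[0, 0] + r[1, 1] + r[2, 2]))
    qx = (r[2, 1] - r[1, 2]) / (4.0 * w)
    qy = (r[0, 2] - r[2, 0]) / (4.0 * w)
    qz = (r[1, 0] - r[0, 1]) / (4.0 * w)
    return position, np.array([qx, qy, qz, w])

# An axis-aligned unit marker in the z = 0 plane
pos, quat = marker_pose([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0],
                         [1.0, 0.0, 0.0], [0.0, 0.0, 0.0]])
```

For an unrotated marker this yields the centre position and the identity quaternion (0, 0, 0, 1), matching the position(x, y, z) and orientation(x, y, z, w) representation used in the text.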
In step S104, converting the first pose information into second pose information corresponding to each lamp detection identifier in the three-dimensional virtual space coordinate system;
in the embodiment of the application, after the first pose information is acquired, the first pose information can be converted into the second pose information corresponding to each lamp detection identifier under the three-dimensional virtual space coordinate system, so that the initial pose information of the virtual lamp in the virtual scene can be calibrated through the second pose information under the same coordinate system, and the accuracy of calibrating the virtual lamp can be improved to a certain extent.
Specifically, a shooting origin coordinate position of a shooting device (e.g., a camera) in a target scene (e.g., a performance stage) is first acquired, and a virtual origin coordinate position of a virtual shooting device (e.g., a virtual camera) in a three-dimensional virtual space coordinate system in a virtual scene constructed based on the target scene is acquired.
Further, after the virtual origin coordinate position and the shooting origin coordinate position are obtained, the first pose information (such as the position of the lamp in the world coordinate system and the rotation pose of the lamp in the world coordinate system) can be converted into the second pose information in the three-dimensional virtual space coordinate system according to a coordinate conversion algorithm based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position and the shooting origin coordinate position.
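The conversion step can be sketched as follows. The patent does not fix the coordinate conversion algorithm; this sketch assumes the two coordinate systems share axis orientation and the 1:1 scale described in this embodiment, in which case the rotation component carries over unchanged and only the position needs re-basing; all names are illustrative:

```python
import numpy as np

def world_to_virtual(first_position, shoot_origin, virtual_origin):
    """Translate a luminaire position from the real-world coordinate system
    into the three-dimensional virtual-space coordinate system, assuming
    aligned axes and 1:1 scale between the two systems."""
    offset = np.asarray(first_position) - np.asarray(shoot_origin)
    return np.asarray(virtual_origin) + offset

second_position = world_to_virtual([4.0, 2.5, 6.0],   # first pose position
                                   [1.0, 0.0, 1.0],   # camera origin on stage
                                   [0.0, 0.0, 0.0])   # virtual camera origin
```

If the real and virtual systems differed in axis orientation or scale, the offset would additionally need a rotation and scaling before being added to the virtual origin.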
In step S105, calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection identifier, to obtain a calibrated lamp calibration pose;
in the embodiment of the application, after the second pose information corresponding to each lamp detection identifier is obtained, the initial pose information of the virtual lamp in the virtual scene can be calibrated based on the second pose information, the lamp model, and the lamp information corresponding to each lamp detection identifier, so as to obtain the calibrated lamp calibration pose. Subsequent synchronous adjustment of the lamps in the target scene and the virtual lamps can then be realized based on the calibrated lamp calibration pose, so the accuracy of calibrating the virtual lamps can be improved to a certain extent, as can the synchronization effect of the virtual lamps and the real lamps.
The virtual scene is a digital scene produced in a digital content generating tool, and the virtual world corresponds to the real world (namely, the target scene) at a 1:1 scale, wherein the lamps in the target scene and the virtual lamps in the virtual scene are in one-to-one correspondence, as are the shooting devices; the target scene corresponds to a real world coordinate system, and the virtual scene corresponds to a three-dimensional virtual space coordinate system. The virtual scene may be specifically represented as a virtual production scene, a three-dimensional stage animation scene, or the like, and may also be represented as other scenes, without specific limitation herein. Virtual production refers to the various digital workflows and methods that use computer-aided production and real-time film visualization; driven by real-time rendering technology, applications of virtual production include early virtual previewing, real-time motion capture of virtual characters, green-screen virtual production, LED virtual production, and the like.
It will be appreciated that in practical applications (such as the target scene), as shown in fig. 13, differences in the position and posture of a lamp greatly affect the light effect produced on an object. For example, the more converged the light source, the greater the effect of the lamp pose on the presented light effect and on the lens, and the shaping effect of the light also differs as the lamp pose changes: as shown in the upper left corner of fig. 13, the light may be concentrated on the legs of the performance object, or, as shown in the lower left corner of fig. 13, the light may be concentrated on the limbs above the waist of the performance object, and so on.
Lamps are installed and measured manually, which is inaccurate, and more than ten or even dozens of lamps often work in the same space, so errors arise between the lamp pose information of the virtual scene and that of the real scene, and the change of the lamp pose in the real scene cannot be synchronously or accurately controlled through the virtual scene. To avoid this situation, as shown in fig. 12, after the second pose information of each lamp detection identifier in the three-dimensional virtual space coordinate system is obtained, the position of the virtual lamp can be matched with the real lamp (namely, the initial pose information of the virtual lamp is matched with the second pose information of the real lamp). Specifically, the initial pose information of the corresponding virtual lamp in the three-dimensional virtual space coordinate system can be obtained according to the lamp model and the lamp information, and the initial pose information corresponding to the virtual lamp can then be calibrated based on the second pose information corresponding to each lamp detection identifier. Specifically, the initial pose information of the virtual lamp can be compared with the second pose information of the real lamp in the same three-dimensional virtual space coordinate system to obtain a corresponding comparison result, and the initial pose information of the virtual lamp can then be adjusted accordingly in the three-dimensional virtual space coordinate system.
For example, suppose that the target scene is a festival evening stage performance reproduced at 1:1 in a virtual production scene. Before the lamps in the formally used target scene are lit, a unique corresponding lamp detection identifier can be allocated to each lamp in the target scene, and the virtual lamps in the virtual production scene can be calibrated in advance based on the second pose information, the lamp model, and the lamp information corresponding to each lamp detection identifier, namely, each virtual lamp is adjusted to be consistent with the pose state of the corresponding lamp in the target scene. Subsequently, based on the lamp calibration pose, when the lamps in the formally used target scene are lit, real-time motion capture and corresponding pose adjustment can be performed on the virtual lamps, so that the pose of the real lamps in the target scene can be synchronously controlled and adjusted to shape the light and shadow effect required by the target scene.
In step S106, based on the lamp calibration pose, the lamps in the target scene are synchronously adjusted.
In the embodiment of the application, after the pose of the virtual lamp in the virtual scene is adjusted to the lamp calibration pose, the pose information of the virtual lamp and the pose information of the real lamp are consistent, so that adjusting the virtual lamp can synchronously control the lighting of the real lamps in the target scene, improving the synchronization effect of the virtual lamps and the real lamps in lighting.
Specifically, after the pose of the virtual lamp in the virtual scene is adjusted to the lamp calibration pose (as illustrated in fig. 11), the pose information of the virtual lamp and the pose information of the real lamp are consistent, that is, the consistency and synchronicity of the virtual lamp and the real lamp are maintained. Then, for example, a lighting manager can input a corresponding lighting operation through a display interface of the virtual scene based on the lighting requirement of the target scene, so that the terminal device can respond to the lighting operation instruction, generate a corresponding adjustment instruction, and send it to the server. The server receives the adjustment instruction for the virtual lamp and, based on the adjustment instruction and the lamp calibration pose, adjusts or controls the virtual lamp in the virtual scene and generates a corresponding lamp adjustment signal. The lamp adjustment signal can then be transmitted to the corresponding real lamp through a light control engine, so as to control or adjust the lighting or orientation of the lamp in the target scene corresponding to the virtual lamp.
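The instruction flow just described can be sketched as follows. This is a hypothetical illustration only: the field names (`lamp_id`, `pose_delta`), the dictionary-based data shapes, and the returned signal format are assumptions, not the patent's actual protocol to the light control engine:

```python
def handle_adjustment(virtual_lamps, calibration, instruction):
    """Apply an operator's adjustment instruction to a calibrated virtual
    luminaire and emit the signal forwarded to the corresponding real lamp
    through the light-control engine. All field names are illustrative."""
    lamp_id = instruction["lamp_id"]
    pose = dict(calibration[lamp_id])          # start from the calibrated pose
    pose.update(instruction["pose_delta"])     # apply the requested change
    virtual_lamps[lamp_id] = pose              # adjust the virtual lamp first
    return {"target": lamp_id, "pose": pose}   # signal destined for the real lamp

calibration = {"A001": {"position": [0.0, 0.0, 0.0],
                        "orientation": [0.0, 0.0, 0.0, 1.0]}}
virtual_lamps = {}
signal = handle_adjustment(virtual_lamps, calibration,
                           {"lamp_id": "A001",
                            "pose_delta": {"position": [1.0, 2.0, 3.0]}})
```

The key property is ordering: the virtual lamp is updated first, and the real lamp receives a signal derived from that update, which is what keeps the two in sync.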
According to the lamp synchronization method, before the lamps in the formally used target scene are lit, a unique corresponding lamp detection identifier can be allocated to each lamp in the target scene, and the real pose information of each lamp detection identifier in the target scene, namely, the first pose information, is obtained based on the collected identifier detection images. By converting the real pose information into second pose information corresponding to each lamp detection identifier in the three-dimensional virtual space coordinate system, the virtual lamps can be calibrated, namely, adjusted to be consistent with the pose state of the lamps in the target scene. Synchronous adjustment of the lamps in the target scene and the virtual lamps can then be achieved based on the lamp calibration pose when the lamps in the target scene are formally lit, thereby improving the synchronization effect of the virtual lamps and the real lamps.
Optionally, based on the embodiment corresponding to fig. 2, in another optional embodiment of the lamp synchronization method provided in the embodiment of the present application, as shown in fig. 3, step S105, calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model, and the lamp information corresponding to each lamp detection identifier to obtain a calibrated lamp calibration pose, includes:
in step S301, initial pose information corresponding to the virtual lamp is obtained according to the lamp model and the lamp information;
in step S302, based on the second pose information, initial pose information corresponding to the virtual lamp is calibrated, so as to obtain a lamp calibration pose.
In the embodiment of the application, after the second pose information corresponding to each lamp detection identifier is obtained, the initial pose information corresponding to the virtual lamp can be obtained according to the lamp model and the lamp information, and the initial pose information corresponding to the virtual lamp can then be calibrated based on the second pose information to obtain the lamp calibration pose. Subsequent synchronous adjustment of the lamps in the target scene and the virtual lamps can thereby be realized based on the calibrated lamp calibration pose, so the accuracy of calibrating the virtual lamps and the synchronization effect of the virtual lamps and the real lamps can be improved to a certain extent.
Specifically, as shown in fig. 12, after the second pose information of each lamp detection identifier in the three-dimensional virtual space coordinate system is obtained, the virtual light can be matched to the real light position (that is, the initial pose information of the virtual lamp is matched against the second pose information of the real lamp). Specifically, according to the lamp model and lamp information, for example, class A lamp 001, class B lamp 001, and so on, and based on the one-to-one correspondence between the real lamps and the virtual lamps, the virtual lamp corresponding to each real lamp in the virtual scene can be accurately obtained, and the position and posture of each virtual lamp are initialized when the virtual scene is constructed, so as to obtain the initial pose information corresponding to each virtual lamp.
Further, after the initial pose information corresponding to each virtual lamp is obtained, the second pose information and the initial pose information corresponding to the virtual lamp can be compared in the three-dimensional virtual space coordinate system to obtain a corresponding comparison result. If the comparison result is that the information is consistent, the position and posture of the virtual lamp relative to the real lamp in the target scene are not offset, that is, calibration is not needed, and the initial pose information corresponding to the virtual lamp can be used as the lamp calibration pose corresponding to the virtual lamp. If the comparison result is that the information is inconsistent, the position and posture of the virtual lamp relative to the real lamp in the target scene are offset, that is, calibration is needed, and the initial pose information corresponding to the virtual lamp can be replaced with the second pose information, the second pose information serving as the lamp calibration pose corresponding to the virtual lamp.
Optionally, based on the embodiment corresponding to fig. 3, in another optional embodiment of the lamp synchronization method provided in the embodiment of the present application, as shown in fig. 4, step S302 calibrates initial pose information corresponding to the virtual lamp based on the second pose information, to obtain a lamp calibration pose, including:
in step S401, comparing the second pose information with the initial pose information corresponding to the virtual lamp to obtain a comparison result;
in step S402, if the comparison result is that the information is consistent, the initial pose information corresponding to the virtual lamp is used as the lamp calibration pose corresponding to the virtual lamp;
in step S403, if the comparison result is that the information is inconsistent, the initial pose information corresponding to the virtual lamp is replaced by the second pose information, so as to obtain the lamp calibration pose corresponding to the virtual lamp.
In the embodiment of the application, after the second pose information corresponding to each lamp detection identifier is obtained, the initial pose information corresponding to the virtual lamp can be obtained according to the lamp model and the lamp information, and the second pose information and the initial pose information corresponding to the virtual lamp can then be compared to obtain a comparison result. If the comparison result is that the information is consistent, the initial pose information corresponding to the virtual lamp is used as the lamp calibration pose corresponding to the virtual lamp; if the comparison result is that the information is inconsistent, the initial pose information corresponding to the virtual lamp is replaced with the second pose information to obtain the lamp calibration pose corresponding to the virtual lamp. Subsequent synchronous adjustment of the lamps in the target scene and the virtual lamps can thereby be realized based on the calibrated lamp calibration pose, so the accuracy of calibrating the virtual lamps and the synchronization effect of the virtual lamps and the real lamps can be improved to a certain extent.
Specifically, as shown in fig. 12, after the second pose information of each lamp detection identifier in the three-dimensional virtual space coordinate system is obtained, the virtual light can be matched to the real light position (that is, the initial pose information of the virtual lamp is matched against the second pose information of the real lamp). Specifically, according to the lamp model and lamp information, for example, class A lamp 001, class B lamp 001, and so on, and based on the one-to-one correspondence between the real lamps and the virtual lamps, the virtual lamp corresponding to each real lamp in the virtual scene (such as the class A lamp 001) can be accurately obtained, and the position and posture of each virtual lamp are initialized when the virtual scene is constructed, so as to obtain the initial pose information corresponding to each virtual lamp.
Further, after the virtual lamp corresponding to each real lamp is obtained, the second pose information and the initial pose information corresponding to the virtual lamp can be compared in the three-dimensional virtual space coordinate system, so as to obtain a corresponding comparison result.
Further (as illustrated in fig. 12, restoring the source file effect), if the comparison result is that the information is consistent, it can be understood that the position (such as the position of the virtual lamp in the three-dimensional virtual space coordinate system) and the posture (such as the rotation posture of the virtual lamp in the three-dimensional virtual space coordinate system) of the virtual lamp relative to the real lamp in the target scene are not offset, that is, calibration is not needed, and the initial pose information corresponding to the virtual lamp can be used as the lamp calibration pose corresponding to the virtual lamp.
Otherwise, if the comparison result is that the information is inconsistent, it can be understood that the position (such as the position of the virtual lamp in the three-dimensional virtual space coordinate system) and the posture (such as the rotation posture of the virtual lamp in the three-dimensional virtual space coordinate system) of the virtual lamp relative to the real lamp in the target scene are offset, that is, calibration is needed, and the initial pose information corresponding to the virtual lamp can be replaced with, or reassigned to, the second pose information, so that the second pose information serves as the lamp calibration pose corresponding to the virtual lamp.
Further, after the initial pose information of all the virtual lamps in the virtual scene is calibrated to the lamp calibration pose, that is, after the pose information of the virtual lamps and the pose information of the real lamps are adjusted to be consistent, subsequent synchronous adjustment of the lamps in the target scene and the virtual lamps can be realized based on the lamp calibration pose.
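The compare-and-replace logic of steps S401 to S403 can be sketched as follows. The pose representation (dictionaries with `position` and `orientation` keys) and the tolerance parameter are illustrative assumptions; the patent only specifies the comparison and the two outcomes:

```python
import numpy as np

def calibrate(initial_pose, second_pose, tol=1e-6):
    """Compare the virtual luminaire's initial pose with the real luminaire's
    second pose in the same virtual coordinate system; keep the initial pose
    if they agree, otherwise replace it with the second pose (S401-S403)."""
    position_ok = np.allclose(initial_pose["position"],
                              second_pose["position"], atol=tol)
    rotation_ok = np.allclose(initial_pose["orientation"],
                              second_pose["orientation"], atol=tol)
    if position_ok and rotation_ok:
        return initial_pose          # no offset: calibration not needed (S402)
    return dict(second_pose)         # offset detected: reassign (S403)

initial  = {"position": [1.0, 2.0, 3.0], "orientation": [0.0, 0.0, 0.0, 1.0]}
matching = {"position": [1.0, 2.0, 3.0], "orientation": [0.0, 0.0, 0.0, 1.0]}
offset   = {"position": [1.5, 2.0, 3.0], "orientation": [0.0, 0.0, 0.0, 1.0]}
```

Running `calibrate` over every luminaire pair yields the set of lamp calibration poses on which the later synchronous adjustment of step S106 is based.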
Optionally, on the basis of the embodiment corresponding to fig. 2, in another optional embodiment of the method for synchronizing a lamp provided in the embodiment of the present application, as shown in fig. 5, the lamp detection identifier includes a two-dimensional code; step S101 assigns a unique corresponding luminaire detection identifier to each luminaire in the target scene, including: step S501 to step S502; and step S102 includes step S503; step S103 includes step S504; step S104 includes step S505;
In step S501, the total number of lamps in the target scene is obtained, and a two-dimensional code set corresponding to the total number of lamps is obtained from a two-dimensional code dictionary;
in step S502, based on the two-dimensional code set, a unique corresponding two-dimensional code and a two-dimensional code identifier corresponding to each two-dimensional code are allocated to each lamp;
in the embodiment of the application, when there is a target scene requiring lighting of the lamps, in order to better acquire the real pose information of each real lamp in the target scene, so that the virtual lamps in the virtual scene can subsequently be better calibrated based on the acquired real pose information of each real lamp, thereby improving the accuracy of calibrating the virtual lamps and the synchronization effect of the virtual lamps and the real lamps, a unique corresponding two-dimensional code can be allocated to each lamp in the target scene.
Specifically, as shown in fig. 11, before a unique two-dimensional code is allocated to each luminaire in the target scene, a two-dimensional code dictionary (e.g., a two-dimensional code dictionary illustrated in fig. 11, for example, including 1000 different two-dimensional codes) is pre-configured in this embodiment.
Further, for example, assuming that the target scene is a performance in which 200 lamps are used in total, that is, the total number of lamps in the target scene is 200, 200 two-dimensional codes may be randomly acquired from the two-dimensional code dictionary, so as to obtain a two-dimensional code set corresponding to the total number of lamps, and the corresponding two-dimensional code identifiers may be represented as ID numbers 0 to 199.
Further, after the two-dimensional code set is obtained, each lamp can be allocated a unique corresponding two-dimensional code and the two-dimensional code identifier corresponding to that two-dimensional code. For example, assuming that the target scene is a performance in which 200 lamps are used in total, of which 20 S-type lamps of one manufacturer are required, 20 different two-dimensional codes with ID numbers 0 to 19 can be obtained from the two-dimensional code set, and the two-dimensional code identifiers can be allocated to the 20 S-type lamps, encoded, for example, as s_0 to s_19.
In step S503, a two-dimensional code detection image corresponding to each two-dimensional code is acquired;
in the embodiment of the application, after each lamp in the target scene is allocated a unique corresponding two-dimensional code, the allocated two-dimensional code can be installed at the corresponding lamp in the target scene. The two-dimensional code corresponding to each lamp is then shot to acquire a two-dimensional code detection image corresponding to each two-dimensional code, so that the real pose information of each two-dimensional code in the target scene can be accurately acquired based on each two-dimensional code detection image.
Specifically, as shown in fig. 11, before the lamps in the target scene are lit in formal use, the allocated two-dimensional codes may be installed at the corresponding lamps in the target scene. A shooting device (such as a camera) is then used to shoot the two-dimensional code installed at each lamp, for example, the two-dimensional code corresponding to mark point ID001 of the class A lamp 001 illustrated in fig. 11, so as to acquire the two-dimensional code detection image corresponding to that two-dimensional code. An acquisition card (such as the acquisition card illustrated in fig. 11, for example, an HDMI acquisition card) then converts the acquired two-dimensional code detection images into digital signals and transmits them to software (such as image processing software) on the server, so that the software can export a position file for each two-dimensional code and thereby accurately acquire the real pose information of each two-dimensional code in the target scene.
In step S504, based on each two-dimensional code detection image, first pose information of each two-dimensional code in a target scene is obtained;
in the embodiment of the application, after the two-dimensional code detection image corresponding to each two-dimensional code is acquired, the first pose information of each two-dimensional code in the target scene can be acquired based on each two-dimensional code detection image. The first pose information can then be converted into second pose information corresponding to each two-dimensional code in the three-dimensional virtual space coordinate system, so that the virtual lamp can be calibrated based on the real pose information, and the accuracy of calibrating the virtual lamp can be improved to a certain extent.
Specifically, after the two-dimensional code detection image corresponding to each two-dimensional code is obtained, the key point coordinates of the two-dimensional code, such as the three-dimensional coordinates of the upper left corner of the two-dimensional code, can be obtained through a deep-learning-based model, a physical segmentation algorithm, an edge detection algorithm, or other algorithms, which are not specifically limited herein, so as to obtain the real coordinate position of the two-dimensional code in the target scene (such as the position coordinates position(x, y, z) of the lamp in the world coordinate system), namely, the first coordinate position corresponding to the two-dimensional code. Then, based on the size of the two-dimensional code installed at each real lamp, the real pose corresponding to each two-dimensional code (such as the rotation amount orientation(x, y, z, w), which represents the rotation pose of the lamp in the world coordinate system) can be calculated according to a visual algorithm and a deep learning model, or other pose algorithms, so that the first pose information corresponding to each two-dimensional code in the target scene is obtained from the first coordinate position and the first pose.
In step S505, the first pose information is converted into second pose information corresponding to each two-dimensional code in the three-dimensional virtual space coordinate system;
In the embodiment of the application, after the first pose information is acquired, the first pose information can be converted into the second pose information corresponding to each two-dimensional code under the three-dimensional virtual space coordinate system, so that the initial pose information of the virtual lamp in the virtual scene can be calibrated through the second pose information under the same coordinate system, and the accuracy of calibrating the virtual lamp can be improved to a certain extent.
Specifically, a shooting origin coordinate position of a shooting device (e.g., a camera) in a target scene (e.g., a performance stage) is first acquired, and a virtual origin coordinate position of a virtual shooting device (e.g., a virtual camera) in a three-dimensional virtual space coordinate system in a virtual scene constructed based on the target scene is acquired.
Further, after the virtual origin coordinate position and the shooting origin coordinate position are obtained, the first pose information (such as the position of the lamp in the world coordinate system and the rotation pose of the lamp in the world coordinate system) can be converted into the second pose information in the three-dimensional virtual space coordinate system according to a coordinate conversion algorithm based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position and the shooting origin coordinate position.
Optionally, based on the embodiment corresponding to fig. 5, in another optional embodiment of the luminaire synchronization method provided in the embodiment of the present application, as shown in fig. 6, step S502 assigns a unique corresponding two-dimensional code and a two-dimensional code identifier corresponding to each two-dimensional code to each luminaire based on the two-dimensional code set, including:
in step S601, classifying lamps based on the lamp model and the lamp information to obtain P lamp categories and lamp sets corresponding to each lamp category, wherein P is an integer greater than or equal to 1;
in step S602, the two-dimensional code set is divided into P two-dimensional code subsets based on P luminaire categories and the luminaire set corresponding to each luminaire category;
in step S603, each two-dimensional code in the P two-dimensional code subsets and the two-dimensional code identifier corresponding to each two-dimensional code are allocated one by one to the lamps in the P lamp sets.
In the embodiment of the present application, when there is a target scene requiring lighting of the lamps, in order to better obtain the real pose information of each real lamp in the target scene, so that the virtual lamps in the virtual scene can subsequently be better calibrated based on the obtained real pose information of each real lamp, thereby improving the accuracy of calibrating the virtual lamps and the synchronization effect of the virtual lamps and the real lamps, the two-dimensional codes can be allocated to the lamps by lamp category.
As shown in fig. 11, before a unique two-dimensional code is allocated to each luminaire in the target scene, a two-dimensional code dictionary (e.g., the two-dimensional code dictionary illustrated in fig. 11, which includes, for example, 1000 different two-dimensional codes) is pre-configured in this embodiment.
Further, assuming for example that the target scene is a performance using 200 lamps in total (that is, the total number of lamps in the target scene), 200 two-dimensional codes may be randomly acquired from the two-dimensional code dictionary to obtain a two-dimensional code set corresponding to the total number of lamps, and the corresponding two-dimensional code identifiers may be represented as ID numbers 0 to 199.
Further, the lamps may be classified based on the lamp model and the lamp information. For example, if the target scene is a performance using 200 lamps in total and classification is performed according to the manufacturer and model of each lamp, then P (e.g., 10) lamp categories (e.g., categories A to J) and a lamp set corresponding to each category may be obtained, where each lamp set contains at least one lamp.
Further, after the two-dimensional code set is obtained, it is divided into P two-dimensional code subsets based on the P lamp categories and the lamp set corresponding to each category, for example into 10 subsets for categories A to J. Assuming that the lamp set of category A contains 10 lamps, the corresponding two-dimensional code subset also contains 10 different two-dimensional codes, e.g., with ID numbers 20 to 29. These 10 two-dimensional codes and their identifiers can then be allocated to the 10 category-A lamps, for example by encoding them as numbers A_20 to A_29, so that each two-dimensional code in the P subsets and the two-dimensional code identifier corresponding to each code are allocated one by one to the lamps in the P lamp sets.
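The classification and one-by-one allocation of steps S601 to S603 can be sketched as follows. This is an illustrative sketch only: the lamp names, category labels, dictionary size, and label format (e.g., "A_20") are assumptions for the example, not values fixed by this embodiment.

```python
# Hypothetical sketch of category-based two-dimensional code assignment:
# classify lamps (S601), draw a code set from the dictionary and split it
# per category (S602), then assign codes to lamps one by one (S603).

import random
from collections import defaultdict

def assign_codes(lamps, dictionary_size=1000, seed=0):
    """lamps: list of (lamp_name, category) tuples.
    Returns {lamp_name: code_label} with labels like 'A_20'."""
    rng = random.Random(seed)
    # Randomly acquire as many code IDs from the dictionary as there are lamps.
    code_ids = rng.sample(range(dictionary_size), len(lamps))
    # Step S601: group lamps by category.
    by_category = defaultdict(list)
    for name, cat in lamps:
        by_category[cat].append(name)
    # Steps S602/S603: split the code set per category and assign one by one.
    assignment, cursor = {}, 0
    for cat in sorted(by_category):
        for name in by_category[cat]:
            assignment[name] = f"{cat}_{code_ids[cursor]}"
            cursor += 1
    return assignment
```

Each lamp thereby receives a unique code label whose prefix records its category, mirroring the A_20 to A_29 numbering in the example above.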
Optionally, in another optional embodiment of the luminaire synchronization method provided in the embodiment of the present application, as shown in fig. 7, on the basis of the embodiment corresponding to fig. 5, acquiring, in step S503, a two-dimensional code detection image corresponding to each two-dimensional code includes:
in step S701, installing a two-dimensional code corresponding to each lamp to a corresponding lamp in a target scene to obtain a two-dimensional code detection point corresponding to each lamp;
in step S702, at the time of acquisition, two-dimensional codes of each two-dimensional code detection point in the target scene are shot, and a two-dimensional code detection image corresponding to each two-dimensional code is obtained.
In the embodiment of the application, after a unique corresponding two-dimensional code is allocated to each lamp in the target scene, the two-dimensional code corresponding to each lamp can be installed at the corresponding lamp to obtain a two-dimensional code detection point for each lamp. Then, at the acquisition time, the two-dimensional code at each detection point in the target scene is photographed to obtain a two-dimensional code detection image for each code, so that the real pose information of each two-dimensional code in the target scene can be accurately obtained from these detection images.
Specifically, as shown in fig. 11, before the lamps in the target scene are formally used for lighting, the allocated two-dimensional codes may be installed at the corresponding lamps. In particular, a two-dimensional code plate for each code may be custom-made from a lightweight, non-deformable material such as an acrylic plate, with the plate size chosen according to the current use scene. The plate bearing the corresponding number is fixed to the light base, which completes the installation and yields a two-dimensional code detection point for each lamp.
Further, at the acquisition time (i.e., earlier than the time at which the lamps in the target scene are formally used for lighting), a shooting device (such as a camera) can photograph the two-dimensional code at each detection point in the target scene. For example, the camera photographs the two-dimensional code of mark point ID001 corresponding to the type-A lamp 001 illustrated in fig. 11 to collect the corresponding detection image. The collected detection image is then converted into a digital signal through a capture card (such as the HDMI capture card illustrated in fig. 11) and transmitted to software (such as image processing software) on the server, which exports a position file for each two-dimensional code. In this way, the real pose information of each two-dimensional code in the target scene, namely the first coordinate position and the first pose of the code in the image coordinate system centered on the camera, is accurately obtained.
Optionally, in another optional embodiment of the method for synchronizing a lamp provided in the embodiment of the present application on the basis of the embodiment corresponding to fig. 7, as shown in fig. 8, the first pose information includes a first coordinate position and a first pose; step S504 obtains first pose information of each two-dimensional code in the target scene based on each two-dimensional code detection image, including:
in step S801, a first coordinate position corresponding to a two-dimensional code in each two-dimensional code detection image is obtained;
in step S802, a first pose corresponding to each two-dimensional code is calculated based on the size of the two-dimensional code installed at each two-dimensional code detection point.
In the embodiment of the application, after the two-dimensional code detection image corresponding to each two-dimensional code is obtained, the first coordinate position corresponding to the two-dimensional code in each detection image can be obtained first, and the first pose corresponding to each code is then calculated based on the size of the two-dimensional code installed at each detection point, yielding the first pose information for each two-dimensional code. This first pose information can then be converted into the second pose information corresponding to each code in the three-dimensional virtual space coordinate system, so that the virtual lamp can be calibrated based on real pose information, improving the accuracy of calibrating the virtual lamp to a certain extent.
Specifically, after the two-dimensional code detection image corresponding to each two-dimensional code is obtained, the coordinates of key points of the two-dimensional code (such as the three-dimensional coordinates of its upper-left corner) can be extracted, for example by a physical segmentation algorithm, an edge detection algorithm, or other algorithms, without specific limitation. This yields the real coordinate position of the two-dimensional code in each detection image (such as the position coordinates (x, y, z) of the lamp in the target scene), namely the position of the two-dimensional code in the world coordinate system as expressed in the image coordinate system centered on the camera, i.e., the first coordinate position.
Further, based on the physical size of the two-dimensional code at each detection point in the target scene (for example, the size of the custom-made code plate in lightweight, non-deformable material such as acrylic), the real pose corresponding to each two-dimensional code can be calculated according to a visual algorithm, a deep learning model, or other pose algorithms, without specific limitation. Here the pose may be represented as a rotation quaternion (x, y, z, w) describing the rotation of the lamp in the world coordinate system, namely the pose of the two-dimensional code in the camera-centered image coordinate system; this is the first pose corresponding to the two-dimensional code. The first pose information of each two-dimensional code in the target scene is then obtained from the first coordinate position and the first pose corresponding to that code.
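As one hedged illustration of this pose computation: marker-pose estimation from a code's known physical size typically yields a 3×3 rotation matrix, which can then be converted to the quaternion (x, y, z, w) form mentioned above. The conversion below is a standard formula offered as a sketch; the upstream marker detection that produces the rotation matrix is assumed and not shown.

```python
# Illustrative helper: convert a 3x3 rotation matrix (nested lists)
# into a unit quaternion (x, y, z, w). Uses the standard trace-based
# branching to stay numerically stable for all rotations.

import math

def rotation_matrix_to_quaternion(R):
    """R: 3x3 rotation matrix as a list of lists. Returns (x, y, z, w)."""
    trace = R[0][0] + R[1][1] + R[2][2]
    if trace > 0.0:
        s = math.sqrt(trace + 1.0) * 2.0  # s = 4 * w
        w = 0.25 * s
        x = (R[2][1] - R[1][2]) / s
        y = (R[0][2] - R[2][0]) / s
        z = (R[1][0] - R[0][1]) / s
    elif R[0][0] > R[1][1] and R[0][0] > R[2][2]:
        s = math.sqrt(1.0 + R[0][0] - R[1][1] - R[2][2]) * 2.0  # s = 4 * x
        w = (R[2][1] - R[1][2]) / s
        x = 0.25 * s
        y = (R[0][1] + R[1][0]) / s
        z = (R[0][2] + R[2][0]) / s
    elif R[1][1] > R[2][2]:
        s = math.sqrt(1.0 + R[1][1] - R[0][0] - R[2][2]) * 2.0  # s = 4 * y
        w = (R[0][2] - R[2][0]) / s
        x = (R[0][1] + R[1][0]) / s
        y = 0.25 * s
        z = (R[1][2] + R[2][1]) / s
    else:
        s = math.sqrt(1.0 + R[2][2] - R[0][0] - R[1][1]) * 2.0  # s = 4 * z
        w = (R[1][0] - R[0][1]) / s
        x = (R[0][2] + R[2][0]) / s
        y = (R[1][2] + R[2][1]) / s
        z = 0.25 * s
    return (x, y, z, w)
```

For example, the identity matrix maps to (0, 0, 0, 1), and a 90° rotation about the z-axis maps to (0, 0, √2/2, √2/2).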
Optionally, in another optional embodiment of the method for synchronizing a lamp provided in the embodiment of the present application on the basis of the embodiment corresponding to fig. 8, as shown in fig. 9, the second pose information includes a second coordinate position and a second pose; step S505 converts the first pose information into second pose information corresponding to each two-dimensional code in the three-dimensional virtual space coordinate system, including:
in step S901, a shooting origin coordinate position in a target scene and a virtual origin coordinate position in a three-dimensional virtual space coordinate system are acquired;
in step S902, the first coordinate position and the first pose are converted into the second coordinate position and the second pose based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position, and the shooting origin coordinate position.
In the embodiment of the application, after the first coordinate position and the first pose are acquired, the shooting origin coordinate position in the target scene and the virtual origin coordinate position in the three-dimensional virtual space coordinate system can be acquired first. The first coordinate position and the first pose are then converted into the second coordinate position and the second pose based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position, and the shooting origin coordinate position, so that the initial pose information of the virtual lamp in the virtual scene can be calibrated against second pose information expressed in the same coordinate system, improving the accuracy of calibrating the virtual lamp to a certain extent.
Specifically, a shooting origin coordinate position (i.e., a position under an image coordinate system centered on a camera) of a shooting device (e.g., a camera) in a target scene (e.g., a performance stage) is first acquired, and a virtual origin coordinate position (i.e., a position under a three-dimensional virtual space coordinate system centered on a virtually constructed camera) of a virtual shooting device (e.g., a virtual camera) in a three-dimensional virtual space coordinate system in a virtual scene constructed based on the target scene is acquired.
Further, after the virtual origin coordinate position and the shooting origin coordinate position are obtained, the first pose information (such as the position of the lamp in the world coordinate system and the rotation pose of the lamp in the world coordinate system) may be converted into the second pose information in the three-dimensional virtual space coordinate system according to a coordinate conversion algorithm (such as a translation, rotation, matrix multiplication, and other algorithms) based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position, and the shooting origin coordinate position.
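A minimal sketch of the translation component of this conversion is given below: a point expressed relative to the real shooting origin is re-expressed relative to the virtual origin. For simplicity the axes of the two coordinate systems are assumed to be aligned; a rotation between the frames (the matrix-multiplication part of the coordinate conversion algorithm) could be composed in the same way, and this assumption is the sketch's, not the embodiment's.

```python
# Hedged sketch: map a first coordinate position (x, y, z) measured in
# the target scene into the three-dimensional virtual space coordinate
# system, given the shooting origin and the virtual origin.

def world_to_virtual(first_position, shooting_origin, virtual_origin):
    """All arguments are (x, y, z) tuples; returns the second coordinate
    position in the virtual space coordinate system."""
    # Offset of the point from the real camera's shooting origin ...
    offset = tuple(p - s for p, s in zip(first_position, shooting_origin))
    # ... re-applied from the virtual camera's origin.
    return tuple(v + o for v, o in zip(virtual_origin, offset))
```

For instance, a lamp 2 m to the right of the real camera ends up 2 m to the right of the virtual camera, wherever the virtual origin sits in the virtual scene.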
Optionally, based on the embodiment corresponding to fig. 2, in another optional embodiment of the method for synchronizing a lamp provided in the embodiment of the present application, as shown in fig. 10, step S106 includes, based on a lamp calibration pose, performing synchronization adjustment on a lamp in a target scene, including:
In step S1001, if an adjustment instruction of the virtual lamp is received, adjusting the virtual lamp in the virtual scene based on the adjustment instruction and the lamp calibration pose, and generating a corresponding lamp adjustment signal;
in step S1002, based on the lamp adjustment signal, the lamps in the target scene corresponding to the virtual lamp are controlled to perform synchronous adjustment.
In the embodiment of the application, after the pose of the virtual lamp in the virtual scene is adjusted to the lamp calibration pose, the pose information of the virtual lamp and that of the real lamp are consistent. If an adjustment instruction for the virtual lamp is then received, the virtual lamp in the virtual scene can be adjusted based on the adjustment instruction and the lamp calibration pose, and a corresponding lamp adjustment signal is generated. Based on this signal, the lamps in the target scene corresponding to the virtual lamp can be controlled to adjust synchronously, so that the virtual lamp synchronously controls the lighting of the real lamps in the target scene, improving the synchronization effect between the virtual and real lamps during lighting.
Specifically, as shown in fig. 11, after the pose of the virtual lamp in the virtual scene is adjusted to the lamp calibration pose (the initialization calibration of the pose of the virtual lamp as illustrated in fig. 11), the coordinate positions and the poses of the virtual lamp and the real lamp are adjusted to be consistent, so that the consistency and the synchronism of the virtual lamp and the real lamp can be maintained.
Further, after the pose information of each virtual lamp in the virtual scene matches the pose information of the corresponding real lamp in the target scene (that is, the pose information is consistent), a lighting manager who needs to adjust or rotate one or more lamps based on the lighting requirements of the target scene can input the corresponding lighting operation (for example, raising the light brightness of type-A lamp 001) on the display interface of the virtual scene in the terminal device. The terminal device responds to this lighting operation and sends the generated adjustment instruction (raise the light brightness of type-A lamp 001) to the server.
Further, when the server receives an adjustment instruction concerning a virtual lamp (e.g., raise the light brightness of virtual type-A lamp 001), it can determine, based on the instruction, the corresponding virtual lamp in the virtual scene whose lamp calibration pose has been calibrated (e.g., virtual type-A lamp 001) and adjust or control that virtual lamp accordingly. Relying on the consistency and synchronization between the virtual lamp and the real lamp, the server can then generate a lamp adjustment signal for the real lamp (e.g., an engine-sent signal or a DMX signal instructing that the brightness of type-A lamp 001 be raised) and transmit this signal to the corresponding real lamp in the target scene, for example through the engine-controlled lighting console, so that the lighting effect of the real lamp (e.g., its brightness) is adjusted in synchronization with the virtual type-A lamp 001 in the virtual scene.
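As a hedged sketch of the DMX branch of this signal path: DMX512 carries a universe of 512 one-byte channel values, so a brightness adjustment for one lamp can be expressed as a frame with the lamp's dimmer channel set to the new level. The channel mapping for any particular lamp is an illustrative assumption here, not one defined by this embodiment.

```python
# Hedged sketch of building a DMX-style adjustment signal for a real
# lamp from a virtual-lamp brightness change. DMX512 universes carry
# 512 channels with 8-bit values (0-255).

def build_dmx_frame(channel_values, universe_size=512):
    """channel_values: {1-based channel: 0..255}. Returns a byte frame."""
    frame = bytearray(universe_size)
    for channel, value in channel_values.items():
        if not 1 <= channel <= universe_size:
            raise ValueError(f"channel {channel} out of range")
        if not 0 <= value <= 255:
            raise ValueError(f"value {value} out of range")
        frame[channel - 1] = value
    return bytes(frame)

def brightness_adjustment(lamp_channel, new_brightness):
    """Adjustment signal setting a single dimmer channel; the mapping
    from a lamp (e.g. A_001) to its channel is assumed known."""
    return build_dmx_frame({lamp_channel: new_brightness})
```

The lighting console (or an engine plugin) would then transmit this frame to the fixture, while the virtual lamp's state is updated in parallel so both stay consistent.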
Referring to fig. 14, fig. 14 is a schematic diagram illustrating an embodiment of a lamp synchronization apparatus according to an embodiment of the present application, and the lamp synchronization apparatus 20 includes:
the processing unit 201 is configured to allocate a unique corresponding lamp detection identifier to each lamp in the target scene, where each lamp detection identifier corresponds to one lamp model and lamp information;
an acquiring unit 202, configured to acquire an identifier detection image corresponding to each luminaire detection identifier;
the obtaining unit 202 is further configured to obtain first pose information of each luminaire detection identifier in the target scene based on each identifier detection image;
the processing unit 201 is further configured to convert the first pose information into second pose information corresponding to each lamp detection identifier in the three-dimensional virtual space coordinate system;
the processing unit 201 is further configured to calibrate with the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection identifier, so as to obtain a calibrated lamp calibration pose;
and the control unit 203 is used for synchronously adjusting the lamps in the target scene based on the lamp calibration pose.
Optionally, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application based on the embodiment corresponding to fig. 14, the processing unit 201 may specifically be configured to:
according to the lamp model and the lamp information, initial pose information corresponding to the virtual lamp is obtained;
and based on the second pose information, calibrating the initial pose information corresponding to the virtual lamp to obtain the lamp calibration pose.
Optionally, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application based on the embodiment corresponding to fig. 14, the processing unit 201 may specifically be configured to:
comparing the second pose information with initial pose information corresponding to the virtual lamp to obtain a comparison result;
if the comparison result is that the information is consistent, the initial pose information corresponding to the virtual lamp is used as the lamp calibration pose corresponding to the virtual lamp;
and if the comparison result is that the information is inconsistent, replacing the initial pose information corresponding to the virtual lamp with the second pose information to obtain the lamp calibration pose corresponding to the virtual lamp.
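The compare-and-replace calibration just described can be sketched as follows; this is a minimal illustration in which the pose representation and the comparison tolerance are assumptions, not details fixed by the apparatus embodiment.

```python
# Minimal sketch of calibrating a virtual lamp's initial pose against
# the measured second pose information: keep the initial pose if the
# comparison result is "consistent", otherwise replace it.

def calibrate_pose(initial_pose, second_pose, tol=1e-6):
    """Each pose: (position (x, y, z), quaternion (x, y, z, w)).
    Returns the lamp calibration pose."""
    def close(a, b):
        return all(abs(u - v) <= tol for u, v in zip(a, b))
    consistent = close(initial_pose[0], second_pose[0]) and \
                 close(initial_pose[1], second_pose[1])
    if consistent:
        return initial_pose  # information consistent: keep initial pose
    return second_pose       # inconsistent: replace with measured pose
```

Either branch leaves the virtual lamp holding a pose that agrees with the real lamp's measured pose, which is what enables the later synchronous adjustment.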
Optionally, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application based on the embodiment corresponding to fig. 14, the processing unit 201 may specifically be configured to:
Acquiring the total number of lamps in a target scene, and acquiring a two-dimensional code set corresponding to the total number of lamps from a two-dimensional code dictionary;
based on the two-dimensional code set, distributing a unique corresponding two-dimensional code and a two-dimensional code identifier corresponding to each two-dimensional code to each lamp;
the acquisition unit 202 may specifically be configured to: collecting two-dimensional code detection images corresponding to each two-dimensional code;
the acquisition unit 202 may specifically be configured to: acquiring first pose information of each two-dimensional code in the target scene based on each two-dimensional code detection image;
the processing unit 201 may be specifically configured to: and converting the first pose information into second pose information corresponding to each two-dimensional code in a three-dimensional virtual space coordinate system.
Optionally, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application based on the embodiment corresponding to fig. 14, the processing unit 201 may specifically be configured to:
classifying lamps based on lamp models and lamp information to obtain P lamp categories and lamp sets corresponding to each lamp category, wherein P is an integer greater than or equal to 1;
dividing the two-dimensional code set into P two-dimensional code subsets based on P lamp categories and lamp sets corresponding to each lamp category;
And distributing each two-dimensional code in the P two-dimensional code subsets and the two-dimensional code identifier corresponding to each two-dimensional code to the lamps in the P lamp sets one by one.
Alternatively, based on the embodiment corresponding to fig. 14, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application, the obtaining unit 202 may specifically be configured to:
installing the two-dimensional code corresponding to each lamp to the corresponding lamp in the target scene to obtain a two-dimensional code detection point corresponding to each lamp;
and shooting the two-dimensional code of each two-dimensional code detection point in the target scene at the acquisition moment to obtain a two-dimensional code detection image corresponding to each two-dimensional code.
Alternatively, based on the embodiment corresponding to fig. 14, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application, the obtaining unit 202 may specifically be configured to:
acquiring a first coordinate position corresponding to the two-dimensional code in each two-dimensional code detection image;
and calculating a first pose corresponding to each two-dimensional code based on the size of the two-dimensional code installed at each two-dimensional code detection point.
Optionally, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application based on the embodiment corresponding to fig. 14, the processing unit 201 may specifically be configured to:
Acquiring a shooting origin coordinate position in a target scene and a virtual origin coordinate position in a three-dimensional virtual space coordinate system;
the first coordinate position and the first pose are converted into the second coordinate position and the second pose based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position, and the photographing origin coordinate position.
Optionally, on the basis of the embodiment corresponding to fig. 14, in another embodiment of the luminaire synchronization device provided in the embodiment of the present application, the control unit 203 may specifically be configured to:
if an adjustment instruction of the virtual lamp is received, adjusting the virtual lamp in the virtual scene based on the adjustment instruction and the lamp calibration pose, and generating a corresponding lamp adjustment signal;
and based on the lamp adjusting signal, controlling the lamps in the target scene corresponding to the virtual lamp to synchronously adjust.
Another aspect of the present application provides a schematic diagram of a computer device. As shown in fig. 15, fig. 15 is a schematic structural diagram of a computer device provided in an embodiment of the present application. The computer device 300 may vary considerably in configuration or performance, and may include one or more central processing units (CPUs) 310 (e.g., one or more processors), a memory 320, and one or more storage media 330 (e.g., one or more mass storage devices) storing application programs 331 or data 332. The memory 320 and the storage medium 330 may provide transitory or persistent storage. The program stored on the storage medium 330 may include one or more modules (not shown), each of which may include a series of instruction operations for the computer device 300. Still further, the central processor 310 may be configured to communicate with the storage medium 330 and execute, on the computer device 300, the series of instruction operations stored in the storage medium 330.
The computer device 300 may also include one or more power supplies 340, one or more wired or wireless network interfaces 350, one or more input/output interfaces 360, and/or one or more operating systems 333, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, etc.
The computer device 300 described above is also used to perform the steps in the corresponding embodiments as in fig. 2 to 10.
Another aspect of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements steps in a method as described in the embodiments shown in fig. 2 to 10.
Another aspect of the present application provides a computer program product comprising a computer program which, when executed by a processor, implements steps in a method as described in the embodiments shown in fig. 2 to 10.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of the units is merely a logical function division, and there may be additional divisions when actually implemented, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical or other form.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be embodied in essence or a part contributing to the prior art or all or part of the technical solution in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a read-only memory (ROM), a random access memory (random access memory, RAM), a magnetic disk, or an optical disk, or other various media capable of storing program codes.

Claims (13)

1. A method of luminaire synchronization, comprising:
assigning a unique corresponding lamp detection identifier to each lamp in the target scene, wherein each lamp detection identifier corresponds to one lamp model and lamp information;
acquiring an identification detection image corresponding to each lamp detection identification;
acquiring first pose information of each lamp detection mark in the target scene based on each mark detection image;
converting the first pose information into second pose information corresponding to each lamp detection mark in a three-dimensional virtual space coordinate system;
calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection mark to obtain a calibrated lamp calibration pose;
and based on the lamp calibration pose, synchronously adjusting the lamps in the target scene.
2. The method of claim 1, wherein calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model, and the lamp information corresponding to each lamp detection identifier to obtain a calibrated lamp calibration pose comprises:
Acquiring initial pose information corresponding to the virtual lamp according to the lamp model and the lamp information;
and based on the second pose information, calibrating the initial pose information corresponding to the virtual lamp to obtain the lamp calibration pose.
3. The method of claim 2, wherein calibrating the initial pose information corresponding to the virtual luminaire based on the second pose information to obtain the luminaire calibration pose comprises:
comparing the second pose information with the initial pose information corresponding to the virtual lamp to obtain a comparison result;
if the comparison result is that the information is consistent, the initial pose information corresponding to the virtual lamp is used as the lamp calibration pose corresponding to the virtual lamp;
and if the comparison result is that the information is inconsistent, replacing the initial pose information corresponding to the virtual lamp with the second pose information to obtain the lamp calibration pose corresponding to the virtual lamp.
4. The method of claim 1, wherein the luminaire detection identifiers comprise two-dimensional codes and two-dimensional code identifiers corresponding to each two-dimensional code; the allocating a unique corresponding lamp detection identifier to each lamp in the target scene includes:
Acquiring the total number of lamps in the target scene, and acquiring a two-dimensional code set corresponding to the total number of lamps from a two-dimensional code dictionary;
based on the two-dimensional code set, distributing the unique corresponding two-dimensional codes and two-dimensional code identifiers corresponding to the two-dimensional codes for each lamp;
the collecting the identification detection image corresponding to each lamp detection identification comprises the following steps:
collecting two-dimensional code detection images corresponding to each two-dimensional code;
the step of obtaining the first pose information of each lamp detection mark in the target scene based on each mark detection image comprises the following steps:
acquiring first pose information of each two-dimensional code in the target scene based on each two-dimensional code detection image;
converting the first pose information into second pose information corresponding to each lamp detection identifier in a three-dimensional virtual space coordinate system, wherein the second pose information comprises:
and converting the first pose information into second pose information corresponding to each two-dimensional code in a three-dimensional virtual space coordinate system.
5. The method of claim 4, wherein the allocating, based on the two-dimensional code set, of a unique corresponding two-dimensional code and its two-dimensional code identifier to each lamp comprises:
classifying the lamps based on the lamp models and the lamp information to obtain P lamp categories and a lamp set corresponding to each lamp category, wherein P is an integer greater than or equal to 1;
dividing the two-dimensional code set into P two-dimensional code subsets based on the P lamp categories and the lamp set corresponding to each lamp category;
and allocating each two-dimensional code in the P two-dimensional code subsets, together with its corresponding two-dimensional code identifier, to the lamps in the P lamp sets one by one.
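The partition in claim 5 can be sketched as below. The function name `partition_code_set` and the category labels are illustrative assumptions; the only property the sketch demonstrates is that the P subsets are disjoint and sized to their lamp sets.

```python
# Hypothetical sketch of claim 5: split the two-dimensional code set into P
# subsets, one per lamp category, each sized to that category's lamp set.

def partition_code_set(lamp_sets, code_set):
    """lamp_sets: {category: [lamps]} -> {category: codes}, one code
    reserved per lamp in that category, consumed in order."""
    subsets, cursor = {}, 0
    for category, lamp_list in lamp_sets.items():
        subsets[category] = code_set[cursor:cursor + len(lamp_list)]
        cursor += len(lamp_list)
    return subsets

lamp_sets = {"moving_head": ["mh_1", "mh_2"], "par": ["par_1"]}  # P = 2
subsets = partition_code_set(lamp_sets, list(range(10)))
```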
6. The method of claim 4, wherein the collecting of the two-dimensional code detection image corresponding to each two-dimensional code comprises:
mounting the two-dimensional code corresponding to each lamp on that lamp in the target scene to obtain a two-dimensional code detection point corresponding to each lamp;
and photographing the two-dimensional code at each two-dimensional code detection point in the target scene at the acquisition moment to obtain the two-dimensional code detection image corresponding to each two-dimensional code.
7. The method of claim 6, wherein the first pose information comprises a first coordinate position and a first pose; the obtaining, based on each two-dimensional code detection image, of the first pose information of each two-dimensional code in the target scene comprises:
obtaining the first coordinate position of the two-dimensional code in each two-dimensional code detection image;
and calculating the first pose of each two-dimensional code based on the size of the two-dimensional code mounted at each two-dimensional code detection point.
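Claim 7 derives the pose from the known physical size of the mounted two-dimensional code. In practice the full 6-DoF pose is typically recovered with a PnP solver (e.g. `cv2.solvePnP` on the four marker corners); the sketch below shows only the distance component under a simple pinhole-camera assumption, with illustrative parameter names.

```python
# Distance from apparent marker size under a pinhole-camera model:
# the marker's pixel size falls off linearly with its distance from
# the camera, so depth = focal_length * real_size / pixel_size.

def marker_distance(focal_length_px, marker_size_m, marker_size_px):
    """Estimate the depth of a square marker of known physical size."""
    return focal_length_px * marker_size_m / marker_size_px

# a 10 cm marker imaged at 100 px by a 1000 px focal length sits ~1 m away
depth = marker_distance(1000.0, 0.10, 100.0)
```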
8. The method of claim 7, wherein the second pose information comprises a second coordinate position and a second pose; the converting of the first pose information into the second pose information corresponding to each two-dimensional code in the three-dimensional virtual space coordinate system comprises:
obtaining a shooting origin coordinate position in the target scene and a virtual origin coordinate position in the three-dimensional virtual space coordinate system;
and converting the first coordinate position and the first pose into the second coordinate position and the second pose based on the three-dimensional virtual space coordinate system, the virtual origin coordinate position and the shooting origin coordinate position.
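The conversion in claim 8 is a rigid transform between the shooting frame and the virtual-space frame. A minimal NumPy sketch, assuming the rotation R between the two coordinate systems is already known (identity here for illustration); the function name is hypothetical.

```python
import numpy as np

def to_virtual_space(p_camera, shoot_origin, virtual_origin, R=None):
    """Re-express a camera-frame point in virtual-space coordinates:
    shift out of the shooting frame, rotate into the virtual axes,
    then shift by the virtual origin."""
    R = np.eye(3) if R is None else np.asarray(R, float)
    return R @ (np.asarray(p_camera, float) - np.asarray(shoot_origin, float)) \
        + np.asarray(virtual_origin, float)

# a point 1 unit ahead of the shooting origin, mapped into a virtual scene
# whose origin sits 10 units up the z axis
p_virtual = to_virtual_space([1.0, 2.0, 3.0], [1.0, 0.0, 0.0], [0.0, 0.0, 10.0])
```

The same transform applied to the marker's orientation (as a rotation matrix) yields the second pose alongside the second coordinate position.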
9. The method of claim 1, wherein the synchronously adjusting of the lamps in the target scene based on the lamp calibration pose comprises:
if an adjustment instruction for the virtual lamp is received, adjusting the virtual lamp in the virtual scene based on the adjustment instruction and the lamp calibration pose, and generating a corresponding lamp adjustment signal;
and controlling the lamp in the target scene corresponding to the virtual lamp to adjust synchronously based on the lamp adjustment signal.
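The synchronization loop of claim 9 can be sketched as a virtual lamp that, on each adjustment instruction, updates its own pose and emits a signal to its paired real fixture. Class and method names are illustrative, not taken from the patent; the real control channel would be something like DMX or Art-Net.

```python
# Minimal sketch of claim 9: an adjustment instruction moves the virtual
# lamp from its calibration pose, and the resulting lamp adjustment signal
# drives the paired physical lamp so the two stay in sync.

class PhysicalLamp:
    def __init__(self):
        self.pose = None
    def apply_signal(self, pose):
        # stand-in for the real control channel (e.g. DMX / Art-Net)
        self.pose = pose

class VirtualLamp:
    def __init__(self, calibration_pose, physical):
        self.pose = calibration_pose  # the calibrated pose from claim 1
        self.physical = physical       # paired real fixture
    def adjust(self, delta):
        # apply the adjustment instruction to the virtual lamp ...
        self.pose = tuple(p + d for p, d in zip(self.pose, delta))
        # ... and emit a lamp adjustment signal so the real lamp follows
        self.physical.apply_signal(self.pose)

real = PhysicalLamp()
virtual = VirtualLamp((0.0, 0.0, 3.0), real)
virtual.adjust((0.5, 0.0, 0.0))
```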
10. A lamp synchronization apparatus, comprising:
the processing unit is used for distributing a unique corresponding lamp detection identifier to each lamp in the target scene, wherein each lamp detection identifier corresponds to one lamp model and lamp information;
the acquisition unit is used for collecting an identification detection image corresponding to each lamp detection identifier;
the acquisition unit is further used for obtaining first pose information of each lamp detection identifier in the target scene based on each identification detection image;
the processing unit is further used for converting the first pose information into second pose information corresponding to each lamp detection identifier in a three-dimensional virtual space coordinate system;
the processing unit is further used for calibrating the virtual lamp in the virtual scene based on the second pose information, the lamp model and the lamp information corresponding to each lamp detection identifier to obtain a calibrated lamp calibration pose;
and the control unit is used for synchronously adjusting the lamps in the target scene based on the lamp calibration pose.
11. A computer device comprising a memory, a processor and a bus system, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 9 when executing the computer program;
the bus system is used for connecting the memory and the processor so as to enable the memory and the processor to communicate.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 9.
13. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any one of claims 1 to 9.
CN202310002163.2A 2023-01-03 2023-01-03 Lamp synchronization method, device, equipment and storage medium Pending CN116485886A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310002163.2A CN116485886A (en) 2023-01-03 2023-01-03 Lamp synchronization method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310002163.2A CN116485886A (en) 2023-01-03 2023-01-03 Lamp synchronization method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116485886A true CN116485886A (en) 2023-07-25

Family

ID=87210753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310002163.2A Pending CN116485886A (en) 2023-01-03 2023-01-03 Lamp synchronization method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116485886A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117412449A (en) * 2023-12-13 2024-01-16 深圳市千岩科技有限公司 Atmosphere lamp equipment, lamp effect playing control method thereof, and corresponding device and medium
CN117412449B (en) * 2023-12-13 2024-03-01 深圳市千岩科技有限公司 Atmosphere lamp equipment, lamp effect playing control method thereof, and corresponding device and medium

Similar Documents

Publication Publication Date Title
US11425802B2 (en) Lighting system and method
WO2007099318A1 (en) Method and apparatus for signal presentation
US20160134851A1 (en) Method and System for Projector Calibration
US10421012B2 (en) System and method for tracking using multiple slave servers and a master server
EP3045017B1 (en) External control lighting systems based on third party content
CN107968901B (en) Lighting system and method for simulating natural environment
CN116485886A (en) Lamp synchronization method, device, equipment and storage medium
CN107016718B (en) Scene rendering method and device
CN106125491A (en) Many optical projection systems
US11765306B2 (en) Distributed command execution in multi-location studio environments
CN106909908B (en) Film scene simulation method, device and system
US20200257831A1 (en) Led lighting simulation system
WO2019098421A1 (en) Object reconstruction device using motion information and object reconstruction method using same
CN101485233A (en) Method and apparatus for signal presentation
JP6730787B2 (en) Projection device
CN116485704A (en) Illumination information processing method and device, electronic equipment and storage medium
CN115861502A (en) Weather rendering method and device in virtual environment, storage medium and electronic equipment
WO2016161486A1 (en) A controller for and a method for controlling a lighting system having at least one light source
CN104301702A (en) Color temperature value processing method and device and camera capable of adjusting color temperature
US10110865B2 (en) Lighting device, lighting system, and program
CN111885794A (en) Light control system and light control method
CN109548235B (en) Synchronous control method and system for computer moving head lamp
CN105103048A (en) Performance system with multi-projection environment
KR102677114B1 (en) Lighting matching system for real and virtual environments based on in-camera visual effects
CN110493540A (en) A kind of scene dynamics illumination real-time collecting method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40090353

Country of ref document: HK