CN115348709A - Smart cloud service lighting display method and system suitable for text travel - Google Patents

Smart cloud service lighting display method and system suitable for text travel

Info

Publication number
CN115348709A
CN115348709A
Authority
CN
China
Prior art keywords
image
pixel
pixel point
adjusted
adjustment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211270389.2A
Other languages
Chinese (zh)
Other versions
CN115348709B (en)
Inventor
包珊陌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Liangye Technology Group Co ltd
Original Assignee
Liangye Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Liangye Technology Group Co ltd filed Critical Liangye Technology Group Co ltd
Priority to CN202211270389.2A priority Critical patent/CN115348709B/en
Publication of CN115348709A publication Critical patent/CN115348709A/en
Application granted granted Critical
Publication of CN115348709B publication Critical patent/CN115348709B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05BELECTRIC HEATING; ELECTRIC LIGHT SOURCES NOT OTHERWISE PROVIDED FOR; CIRCUIT ARRANGEMENTS FOR ELECTRIC LIGHT SOURCES, IN GENERAL
    • H05B47/00Circuit arrangements for operating light sources in general, i.e. where the type of light source is not relevant
    • H05B47/10Controlling the light source
    • H05B47/105Controlling the light source in response to determined parameters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02BCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
    • Y02B20/00Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
    • Y02B20/40Control techniques providing energy savings, e.g. smart controller or presence detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a smart cloud service illumination display method and system suitable for text travel, comprising the following steps: the cloud server determines a first projection lamp arranged at a first position, and randomly determines one text travel image in a first text travel image set as a first text travel image; a preset number of first collected images corresponding to the first text travel image are randomly determined in a first collected image set; binarization processing is carried out on the face subimage to obtain a first binarized image, and the gray levels of the black pixel points are processed asynchronously according to the part of the face corresponding to each black pixel point to obtain a first adjusted image; the pixel values of black pixel points with different gray levels in the first adjusted image are adjusted according to the first pixel value to obtain a second adjusted image; after the size of the second adjusted image is adjusted, it is fused with the first text travel image to obtain a second text travel image; and the first projection lamp carries out illumination display at the first position according to the second text travel image.

Description

Smart cloud service lighting display method and system suitable for text travel
Technical Field
The invention relates to the technical field of data processing, and in particular to a smart cloud service illumination display method and system suitable for text travel.
Background
Illumination may be provided in a variety of ways, for example by a projection lamp. When a projection lamp is used for illumination, different required images can be projected as needed. In the construction of a themed text travel park, the projection lamp can project images corresponding to the theme park; in the prior art, however, the projection lamp cannot provide targeted, interactive smart lighting displays tailored to different visitors.
Disclosure of Invention
The embodiment of the invention provides a smart cloud service illumination display method and system suitable for text travel, which can collect images of visitors at different positions and fuse the visitor images with the text travel images of the corresponding theme park for illumination, so that the projection lamp displays the theme in a targeted way while illuminating, interacts with visitors, and projects the combination of the visitor image and the text travel theme with a high degree of intelligence.
In a first aspect of the embodiments of the present invention, a smart cloud service lighting display method suitable for a travel is provided, including:
the cloud server determines a first projection lamp arranged at a first position, determines a first text travel image set configured for the first projection lamp according to the position serial number of the first position, and randomly determines one text travel image in the first text travel image set as a first text travel image;
determining a first image acquisition device arranged at the first position, acquiring a first collected image set collected by the first image acquisition device within a preset time period, and randomly determining a preset number of first collected images corresponding to the first text travel image in the first collected image set;
recognizing the face contour in the first collected image to obtain a face subimage, performing binarization processing on the face subimage to obtain a first binarized image only with white pixel points and black pixel points, and performing asynchronous processing on the gray level of the black pixel points according to the part of the face corresponding to each black pixel point to obtain a first adjusted image;
acquiring a first pixel value of a pixel point in the first text travel image, and adjusting the pixel values of black pixel points with different gray levels in the first adjusted image according to the first pixel value to obtain a second adjusted image;
determining the size of a reserved image of an image reserved position in the first text travel image, adjusting the size of the second adjusted image, and fusing the second adjusted image and the first text travel image to obtain a second text travel image;
the first projection lamp is used for carrying out illumination display on the first position according to the second text travel image.
Optionally, in a possible implementation manner of the first aspect, the determining, by the cloud server, a first projection lamp set at a first location, determining a first travel image set by the first projection lamp according to a location serial number of the first location, and randomly determining, as the first travel image, one travel image in the first travel image set includes:
after the first projection lamp judges that the time for displaying the current travel chart is greater than or equal to first preset time, an image display request is generated, and the first projection lamp sends the image display request and the first position where the image display request is located to a cloud server;
the cloud server determines a first projection lamp arranged at a first position after receiving the image display request, and determines a first travel image set arranged by the first projection lamp according to the position serial number of the first position, wherein each position serial number is provided with a first travel image set corresponding to the position serial number;
and deleting the second text travel image displayed by the current text travel image from the first text travel image set, and randomly selecting one text travel image from the first text travel image set as the first text travel image.
Optionally, in a possible implementation manner of the first aspect, the recognizing a face contour in the first collected image to obtain a face subimage, performing binarization processing on the face subimage to obtain a first binarized image with only white pixel points and black pixel points, and performing asynchronous processing on the gray level of the black pixel points according to a portion of the face corresponding to each black pixel point to obtain a first adjusted image includes:
identifying the face contour in the first collected image based on OpenCV to obtain a face subimage, obtaining the pixel value of each pixel point in the face subimage, converting pixel points within a preset pixel interval into black pixel points, and converting pixel points outside the preset pixel interval into white pixel points to obtain the first binarized image;
detecting each part in the face subimage based on a Haar cascade detector, wherein the part comprises any one or more of a face, eyes, a nose, a mouth and ears, and determining pixel points corresponding to each part in the face subimage to obtain a pixel point set corresponding to each part;
and asynchronously adjusting the gray level of the corresponding pixel point according to the part corresponding to each pixel point set to obtain the adjusted pixel value of each pixel point set, and taking an image formed by the adjusted pixel value of each pixel point set as a first adjusted image.
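A minimal sketch of the binarization step above, in Python with NumPy. The face sub-image is assumed to have already been cut out (the patent uses OpenCV face-contour recognition and a Haar cascade detector for the parts, which are omitted here), and the bounds `lo` and `hi` of the preset pixel interval are illustrative values, not values fixed by the patent:

```python
import numpy as np

def binarize_face(gray, lo=0, hi=128):
    """Convert a grayscale face sub-image into a black/white image.

    Pixel points whose value falls inside the preset interval [lo, hi]
    become black (0); all others become white (255). Face detection and
    per-part Haar cascade detection are assumed to have happened already.
    """
    gray = np.asarray(gray)
    mask = (gray >= lo) & (gray <= hi)
    return np.where(mask, 0, 255).astype(np.uint8)
```

The black pixel points produced here are the ones whose gray levels are later adjusted asynchronously, part by part.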
Optionally, in a possible implementation manner of the first aspect, the asynchronously adjusting the gray scale of the corresponding pixel point according to the portion corresponding to each pixel point set to obtain the adjusted pixel value of each pixel point set, and taking an image formed by the adjusted pixel value of each pixel point set as the first adjusted image includes:
determining the gray coefficient value of the part corresponding to each pixel point set, and adjusting the gray of the corresponding pixel point according to the gray coefficient value to obtain the adjusted pixel value of each pixel point set;
taking the number of pixels in the pixel point set corresponding to the face as a first number, taking the number of pixels in the pixel point set corresponding to other parts except the face as a second number, and calculating according to the second number and the first number to obtain the pixel ratio of each part;
if the pixel proportion of any part is judged to be larger than the preset maximum proportion or smaller than the preset minimum proportion, the number of the pixels in the pixel set corresponding to the corresponding part is adjusted to obtain an adjusted pixel set, and a first adjusted image is obtained according to the adjusted pixel set.
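The ratio check in the preceding steps can be sketched as follows. The dict-of-pixel-sets representation, the part names, and the `max_ratio`/`min_ratio` thresholds are illustrative assumptions, not values from the patent:

```python
def part_pixel_ratios(part_sets, max_ratio=0.25, min_ratio=0.02):
    """part_sets maps a part name to its set of pixel points and must
    contain a 'face' entry (the first number). For every other part the
    pixel proportion (second number / first number) is computed, and the
    part is flagged for shrinking or growing when it falls outside the
    preset [min_ratio, max_ratio] range."""
    first_number = len(part_sets['face'])
    report = {}
    for part, pixels in part_sets.items():
        if part == 'face':
            continue
        ratio = len(pixels) / first_number
        if ratio > max_ratio:
            action = 'shrink'
        elif ratio < min_ratio:
            action = 'grow'
        else:
            action = None
        report[part] = (ratio, action)
    return report
```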
Optionally, in a possible implementation manner of the first aspect, if it is determined that the pixel proportion of any one of the portions is greater than the preset maximum proportion or less than the preset minimum proportion, adjusting the number of pixels in the pixel set corresponding to the corresponding portion to obtain an adjusted pixel set, and obtaining a first adjusted image according to the adjusted pixel set, where the adjusting includes:
if the pixel proportion of any part is judged to be larger than the preset maximum proportion, calculating according to the pixel proportion of the part and the preset maximum proportion to obtain a pixel point adjusting coefficient;
calculating according to the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to the part to obtain the number of pixel points to be deleted, and deleting part of the edge pixel points in the pixel point set according to this number to obtain a deletion-adjusted pixel point set;
if the pixel proportion of any part is judged to be smaller than the preset minimum proportion, calculating according to the pixel proportion of the part and the preset minimum proportion to obtain a pixel point adjustment coefficient;
and calculating according to the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to the part to obtain the number of pixel points to be added, locking part of the edge pixel points in the pixel point set according to this number, and classifying other pixel points adjacent to the edge pixel points into the pixel point set to obtain an addition-adjusted pixel point set.
Optionally, in a possible implementation manner of the first aspect, the calculating, according to the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to the part, to obtain the number of pixel points to be deleted, and the deleting, according to this number, of part of the edge pixel points in the pixel point set to obtain the deletion-adjusted pixel point set, includes:
the number of pixel points to be deleted is calculated by the following formula,

$$D_i = \omega_i \left( S_i - R_{\max} \cdot S_1 \right)$$

wherein $D_i$ is the number of pixel points to be deleted for the $i$-th part, $S_i$ is the second number of pixel points in the pixel point set of the $i$-th part, $S_1$ is the first number, $\omega_i$ is the weight value of the $i$-th part, and $R_{\max}$ is the preset maximum proportion;
determining the edge pixel points in the pixel point set according to the position relation of all the pixel points in the set; if the number of edge pixel points is judged to be greater than or equal to the number of pixel points to be deleted, deleting all the edge pixel points from the pixel point set to obtain the deletion-adjusted pixel point set;
if the number of edge pixel points is judged to be smaller than the number of pixel points to be deleted, determining the ratio of the number of pixel points to be deleted to the number of edge pixel points to obtain a first deletion layer number, and rounding the first deletion layer number;
deleting all the edge pixel points in the pixel point set, then determining the edge pixel points of the adjusted set again, and repeating so that the number of times edge pixel points are determined matches the first deletion layer number.
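The deletion branch can be sketched as below. The patent's closed-form expression is embedded as an image in the original publication, so `deletion_count` uses an assumed reconstruction (the part's weight times its pixel excess over the preset maximum proportion), while `peel_edge_layers` mirrors the layer-by-layer removal of edge pixel points:

```python
import math

def neighbours(p):
    r, c = p
    return ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))

def deletion_count(s_i, s_face, weight, max_ratio):
    # Assumed reconstruction of the patent's formula: pixel points to
    # delete so the part's proportion approaches the preset maximum.
    return max(0, math.ceil(weight * (s_i - max_ratio * s_face)))

def find_edges(pixels):
    # an edge pixel point has at least one 4-neighbour outside the set
    return {p for p in pixels if any(q not in pixels for q in neighbours(p))}

def peel_edge_layers(pixels, n_delete):
    """Delete whole layers of edge pixel points until at least n_delete
    pixel points have been removed (the 'deletion layer' logic)."""
    pixels = set(pixels)
    deleted = 0
    while deleted < n_delete and pixels:
        edges = find_edges(pixels)
        if not edges:
            break
        pixels -= edges
        deleted += len(edges)
    return pixels
```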
Optionally, in a possible implementation manner of the first aspect, the calculating, according to the pixel point adjustment coefficient, of the number of pixel points in the pixel point set corresponding to the part to obtain the number of pixel points to be added, the locking of part of the edge pixel points in the pixel point set according to this number, and the classifying of other pixel points adjacent to the edge pixel points into the pixel point set to obtain the addition-adjusted pixel point set, includes:
the number of pixel points to be added is calculated by the following formula,

$$A_i = \omega_i \left( R_{\min} \cdot S_1 - S_i \right)$$

wherein $A_i$ is the number of pixel points to be added for the $i$-th part, $S_i$ is the second number of pixel points in the pixel point set of the $i$-th part, $S_1$ is the first number, $\omega_i$ is the weight value of the $i$-th part, and $R_{\min}$ is the preset minimum proportion;
determining the edge pixel points in the pixel point set according to the position relation of all the pixel points in the set, and determining all face pixel points adjacent to each edge pixel point as pixel points to be added;
counting all the pixel points to be added to obtain a first added number; if the first added number is greater than or equal to the number of pixel points to be added, classifying all the pixel points to be added into the pixel point set;
if the first added number is smaller than the number of pixel points to be added, determining the ratio of the number of pixel points to be added to the first added number to obtain a first addition layer number, and rounding the first addition layer number;
classifying all the pixel points to be added into the pixel point set, then determining the edge pixel points and pixel points to be added of the adjusted set again, and repeating so that the number of times edge pixel points are determined matches the first addition layer number.
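The addition branch admits an analogous sketch. As before, the patent's formula is an embedded image, so `addition_count` is an assumed reconstruction (the part's weight times its shortfall below the preset minimum proportion); `grow_by_layers` classifies adjacent face pixel points into the part's set layer by layer:

```python
import math

def neighbours(p):
    r, c = p
    return ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))

def addition_count(s_i, s_face, weight, min_ratio):
    # Assumed reconstruction: pixel points to add so the part's
    # proportion rises toward the preset minimum.
    return max(0, math.ceil(weight * (min_ratio * s_face - s_i)))

def grow_by_layers(part, face, n_add):
    """Classify whole layers of adjacent face pixel points into the
    part's set until at least n_add pixel points have been added."""
    part = set(part)
    candidates = set(face) - part
    added = 0
    while added < n_add:
        fringe = {p for p in candidates
                  if any(q in part for q in neighbours(p))}
        if not fringe:
            break
        part |= fringe
        candidates -= fringe
        added += len(fringe)
    return part
```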
Optionally, in a possible implementation manner of the first aspect, the edge pixel point is determined by:
and acquiring the adjacent pixel points of each pixel point in the pixel point set; if a pixel point is in contact with pixel points of another part, the corresponding pixel point is determined to be an edge pixel point.
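This edge-pixel definition translates directly into code; a pixel point whose neighbourhood touches another part's pixel point set is an edge pixel point (using the 4-neighbourhood rather than the 8-neighbourhood is an assumption):

```python
def edge_pixels(part, other_parts):
    """Return the edge pixel points of `part`: those with at least one
    4-neighbour belonging to the pixel point set of a different part."""
    others = set().union(*other_parts) if other_parts else set()

    def neighbours(p):
        r, c = p
        return ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))

    return {p for p in part if any(q in others for q in neighbours(p))}
```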
Optionally, in a possible implementation manner of the first aspect, the obtaining a first pixel value of a pixel point in the first text travel image, and adjusting the pixel values of black pixel points with different gray levels in the first adjusted image according to the first pixel value to obtain a second adjusted image, includes:
selecting a second pixel value interval corresponding to the first pixel value;
determining a plurality of second sub-pixel values respectively corresponding to the second pixel value intervals according to the pixel values of the black pixel points with different gray levels in the first adjustment image;
and replacing and adjusting the pixel values of the black pixel points with different gray levels in the first adjusted image according to the second sub-pixel value to obtain a second adjusted image.
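A sketch of this replacement step, assuming the "second pixel value interval" is a band of width `band` centred on the first pixel value and that the second sub-pixel values are spaced evenly across it (neither detail is fixed by the patent):

```python
import numpy as np

def recolor_gray_levels(adjusted, first_pixel_value, band=40):
    """Map each distinct gray level of the black pixel points in the
    first adjusted image to a second sub-pixel value drawn from an
    interval around the theme image's first pixel value."""
    adjusted = np.asarray(adjusted)
    out = adjusted.copy()
    # gray levels of the black pixel points (white stays 255)
    levels = sorted(v for v in np.unique(adjusted) if v < 255)
    lo = max(0, first_pixel_value - band // 2)
    hi = min(255, first_pixel_value + band // 2)
    subs = np.linspace(lo, hi, num=max(len(levels), 1), dtype=np.uint8)
    for level, sub in zip(levels, subs):
        out[adjusted == level] = sub
    return out
```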
Optionally, in a possible implementation manner of the first aspect, the determining a size of a reserved image of an image reserved position in the first text travel image, and, after adjusting the size of the second adjusted image, fusing the second adjusted image and the first text travel image to obtain a second text travel image, includes:
comparing the size of the reserved image with the size of the second adjustment image to obtain a first adjustment proportion;
adjusting the size of the second adjustment image based on the first adjustment proportion, and determining a central pixel point of the adjusted second adjustment image as a first reference point;
and determining a central pixel point of an image reserved position as a second reference point, and overlapping the first reference point and the second reference point so as to fuse a second adjustment image and the first travel image to obtain a second travel image.
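The fusion step can be sketched as follows; nearest-neighbour scaling stands in for a library resize (e.g. `cv2.resize`), single-channel images are assumed, and the reserved region is assumed to lie fully inside the theme image:

```python
import numpy as np

def fuse_at_reserved_position(theme, overlay, reserved_center, reserved_size):
    """Scale the adjusted image to the reserved size and paste it so its
    centre (first reference point) coincides with the centre of the
    image reserved position (second reference point)."""
    theme = np.asarray(theme)
    overlay = np.asarray(overlay)
    th, tw = reserved_size
    oh, ow = overlay.shape[:2]
    # nearest-neighbour scaling of the adjusted image to the reserved size
    rows = np.arange(th) * oh // th
    cols = np.arange(tw) * ow // tw
    scaled = overlay[rows][:, cols]
    out = theme.copy()
    # overlap the two reference points by anchoring at the reserved centre
    r0 = reserved_center[0] - th // 2
    c0 = reserved_center[1] - tw // 2
    out[r0:r0 + th, c0:c0 + tw] = scaled
    return out
```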
In a second aspect of the embodiments of the present invention, a smart cloud service lighting display system suitable for text travel is provided, including:
the determining module, which is used for enabling the cloud server to determine a first projection lamp arranged at a first position, determine a first text travel image set configured for the first projection lamp according to the position serial number of the first position, and randomly determine one text travel image in the first text travel image set as a first text travel image;
the acquisition module is used for determining a first image acquisition device arranged at a first position, acquiring a first acquisition image set acquired by the first image acquisition device within a preset time period, and randomly determining a preset number of first acquisition images corresponding to first travel images in the first acquisition image set;
the identification module is used for identifying the face contour in the first collected image to obtain a face subimage, carrying out binarization processing on the face subimage to obtain a first binarized image only with white pixel points and black pixel points, and carrying out asynchronous processing on the gray level of the black pixel points according to the part of the face corresponding to each black pixel point to obtain a first adjusted image;
the adjusting module is used for acquiring a first pixel value of a pixel point in the first text travel image, and adjusting the pixel values of black pixel points with different gray levels in the first adjusted image according to the first pixel value to obtain a second adjusted image;
the fusion module is used for determining the size of a reserved image of an image reserved position in the first travel image, adjusting the size of the second adjusted image, and fusing the second adjusted image and the first travel image to obtain a second travel image;
and the display module is used for enabling the first projection lamp to carry out illumination display on the first position according to the second text travel image.
In a third aspect of the embodiments of the present invention, a storage medium is provided, in which a computer program is stored, which, when being executed by a processor, is adapted to implement the method according to the first aspect of the present invention and various possible designs of the first aspect of the present invention.
The invention provides a smart cloud service lighting display method and system suitable for text travel. Different text travel images can be set for different positions, and the face images of visitors are extracted as they walk and play at the different positions. The invention combines the preset text travel image with the dynamic face image of the visitor to obtain a corresponding fused image, so that visitor images are collected while the projection lamp illuminates, interaction between the lighting equipment and the visitors is realized, the display is entertaining as well as illuminating, the degree of intelligence of the corresponding theme park is improved, visitors are attracted, and the flow of people in the theme park is increased.
When fusing images, the method calculates the proportion between the face and its different parts and sets different gray levels for different parts, so that the parts of each visitor show a certain difference in the fused image. Moreover, the invention can achieve a certain beautifying effect: when the proportion of a part of a visitor's face is too large or too small, the invention changes the size of that part to some extent so that the proportion of each part relative to the face is closer to the golden ratio than before beautification, giving the face image more aesthetic appeal in the fused image.
According to the invention, when a part is enlarged or reduced, the edge pixel points are determined and, based on them, the pixel points to be deleted or added, so that the outlines of different parts can be adjusted and the outline size of each part is better proportioned to the size of the face.
Drawings
Fig. 1 is a schematic view of an application scenario of the technical solution provided in the present invention;
FIG. 2 is a flowchart of a first embodiment of a smart cloud service lighting display method for a travel;
FIG. 3 is a flowchart of a second embodiment of a smart cloud service lighting display method for a travel;
fig. 4 is a block diagram of a first embodiment of a smart cloud service lighting display system suitable for travel.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims, as well as in the drawings, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the invention described herein are capable of operation in sequences other than those illustrated or described herein.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the processes do not mean the execution sequence, and the execution sequence of the processes should be determined by the functions and the internal logic of the processes, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
It should be understood that in the present application, "comprising" and "having" and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that, in the present invention, "a plurality" means two or more. "and/or" is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "comprises A, B and C" and "comprises A, B, C" means that A, B, C all comprise, "comprises A, B or C" means that one of three A, B, C is comprised, "comprises A, B and/or C" means that any 1 or any 2 or 3 of three A, B, C are comprised.
It should be understood that in the present invention, "B corresponding to a", "a corresponds to B", or "B corresponds to a" means that B is associated with a, and B can be determined from a. Determining B from a does not mean determining B from a alone, but may be determined from a and/or other information. And the matching of A and B means that the similarity of A and B is greater than or equal to a preset threshold value.
As used herein, "if" may be interpreted, depending on context, as "when" or "upon" or "in response to determining" or "in response to detecting".
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
As shown in fig. 1, the application scenario of the technical solution provided by the present invention includes a cloud server and a plurality of projection lamps arranged at different positions. The projection lamps exchange data with the cloud server through a communication module; the cloud server sends each projection lamp the image it is to display in real time, and the projection lamp displays the corresponding image. Image acquisition devices, which may be cameras, are arranged at the different positions, and images of pedestrians passing the corresponding position can be collected through them.
The invention provides a smart cloud service illumination display method suitable for text travel, as shown in fig. 2, comprising the following steps:
step S110, the cloud server determines a first projection lamp set at a first position, determines a first travel image set by the first projection lamp according to a position number of the first position, and randomly determines a travel image in the first travel image set as a first travel image.
The cloud server first determines a first projection lamp arranged at a first position. The first position may be a garden park, a travel park, or the like, and the first projection lamp may be a spotlight with an image display function. A first text travel image set configured for the first projection lamp is determined according to the position serial number of the first position, and one text travel image in the first text travel image set is randomly determined as the first text travel image. The first text travel image may be an image with landscape elements, historic elements, or cultural elements; for example, if the first position is a garden park, it may be an image of that garden park.
In a possible implementation manner of the technical solution provided by the present invention, as shown in fig. 3, step S110 includes:
Step S1101, after the first projection lamp determines that the current image has been displayed for a duration greater than or equal to a first preset time, it generates an image display request and sends the request together with its first position to the cloud server. In other words, once the display duration of the current image reaches the first preset time, the next image needs to be displayed, at which point the first projection lamp generates the image display request. Staff may configure the corresponding first position and position number for each first projection lamp.
Step S1102, after receiving the image display request, the cloud server determines the first projection lamp arranged at the first position and, according to the position number of the first position, determines the first cultural tourism image set configured for that lamp; each position number has its own corresponding image set. Different first projection lamps may be configured in advance with different image sets, each of which may include multiple images set by staff. The position number may be position 1, position 2, position 3, and so on.
Step S1103, deleting the second cultural tourism image currently being displayed from the first cultural tourism image set, and randomly selecting one of the remaining images as the first cultural tourism image. Removing the currently displayed image before selection ensures that the next randomly selected image differs from the image displayed at the previous moment, so the images displayed by the first projection lamp in adjacent time periods are always different.
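The selection in step S1103 can be sketched as follows; the set contents and the helper name `pick_next_image` are illustrative assumptions, not part of the original disclosure:

```python
import random

def pick_next_image(image_set, current_image):
    """Randomly pick the next image, excluding the one currently displayed."""
    # Removing the current image guarantees adjacent time periods differ.
    candidates = [img for img in image_set if img != current_image]
    return random.choice(candidates)

image_set = ["garden.png", "pavilion.png", "lantern.png"]
nxt = pick_next_image(image_set, "garden.png")
```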
Step S120, determining a first image acquisition device arranged at the first position, acquiring a first captured image set collected by that device within a preset time period, and randomly selecting, from the first captured image set, a preset number of first captured images corresponding to the first cultural tourism image. The first image acquisition device captures images of everyone passing its position, yielding the first captured image set. The preset number may be 1, 2, and so on, and may be configured in advance for each cultural tourism image; for example, an image to be fused with one captured image has a preset number of 1, and one to be fused with two captured images has a preset number of 2. Each first captured image can be regarded as an image of a person, i.e., an image containing a human face.
Step S130, recognizing the face contour in the first captured image to obtain a face sub-image, binarizing the face sub-image to obtain a first binarized image containing only white and black pixel points, and asynchronously processing the gray levels of the black pixel points according to the facial part each black pixel point belongs to, obtaining a first adjusted image. In practice, different people passing the first position produce different captured images, so recognizing the face contour yields face sub-images of different people. Binarization is a first processing pass that makes the facial parts recognizable and displayable under different lighting conditions. Black pixel points of different gray levels then correspond to different facial parts; the gray level may range from 0 to 255, and different parts may receive different values, e.g., the eyes 0, the eyebrows 50, the ears 100, and so on.
In one possible implementation manner of the technical solution provided by the present invention, step S130 includes:
The face contour in the first captured image is recognized based on OpenCV to obtain the face sub-image, which can be regarded as the image enclosed by the face contour. The pixel value of each pixel point in the face sub-image is then obtained; pixel points whose values fall within a preset pixel interval are converted into black pixel points, and the remaining pixel points are converted into white pixel points, yielding the first binarized image. For example, the eyes and eyebrows are typically black or dark, so their pixel values define one preset interval, while the lips are typically red or pink, so their pixel values define another. In an actual application scene, supplementary lighting may be applied to the face so that its colors can be identified.
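A minimal sketch of the interval-based binarization, assuming a grayscale face sub-image represented as a NumPy array (the OpenCV contour-recognition step is omitted, and the interval values are illustrative):

```python
import numpy as np

def binarize_face(face_img, intervals):
    """Map pixels inside any preset interval to black (0), the rest to white (255)."""
    out = np.full(face_img.shape, 255, dtype=np.uint8)  # start all white
    for lo, hi in intervals:
        mask = (face_img >= lo) & (face_img <= hi)
        out[mask] = 0  # inside a preset interval -> black pixel point
    return out

# Toy 2x3 grayscale "face sub-image"; one interval for dark (eye/brow) pixels.
face = np.array([[10, 200, 30], [120, 40, 250]], dtype=np.uint8)
binary = binarize_face(face, intervals=[(0, 60)])
```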
Each part in the face sub-image is detected based on a Haar cascade detector, the parts including any one or more of the face, eyes, nose, mouth, and ears. The pixel points corresponding to each part in the face sub-image are then determined, yielding a pixel point set per part.
The gray level of the corresponding pixel points is adjusted asynchronously according to the part each pixel point set belongs to, so that each part has a different gray level during display; for example, the gray level of the pixel points corresponding to the mouth may be adjusted to 60. The adjusted pixel values of each pixel point set together form the first adjusted image.
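The per-part asynchronous gray adjustment can be sketched as follows, assuming each part's pixel point set is a set of (row, column) coordinates; the part names and gray levels are illustrative:

```python
def adjust_gray_by_part(part_sets, gray_map):
    """Assign each part's pixel set its own gray level (asynchronous adjustment)."""
    adjusted = {}
    for part, pixels in part_sets.items():
        level = gray_map[part]
        # Every pixel of the same part receives the same gray level.
        adjusted[part] = {pos: level for pos in pixels}
    return adjusted

part_sets = {"eye": {(0, 0), (0, 1)}, "mouth": {(5, 3)}}
gray_map = {"eye": 0, "mouth": 60}
adjusted = adjust_gray_by_part(part_sets, gray_map)
```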
In a possible embodiment, asynchronously adjusting the gray levels of the corresponding pixel points according to the part of each pixel point set to obtain the adjusted pixel values, and taking the image formed by these adjusted values as the first adjusted image, includes:
The gray coefficient value of the part corresponding to each pixel point set is determined, and the gray level of the corresponding pixel points is adjusted according to that coefficient, yielding the adjusted pixel value of each set. For example, the gray coefficient value of the pixel point set corresponding to the face may be 255 or 240, and that of the ears 230. In this way the gray values of the face image are converted so that different parts have different gray values; parts with similar colors, such as the ears and the face, or the eyebrows and the eyes, may end up with similar gray values.
The number of pixel points in the set corresponding to the face is taken as a first number, the number of pixel points in the set of each part other than the face as a second number, and the pixel ratio of each part is calculated from the second number and the first number. This pixel ratio reflects the proportion of each part relative to the face.
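A minimal sketch of the pixel-ratio calculation, with illustrative counts (the function name is an assumption):

```python
def pixel_ratios(part_counts, face_count):
    """Ratio of each part's pixel count (second number) to the face's (first number)."""
    return {part: count / face_count for part, count in part_counts.items()}

ratios = pixel_ratios({"eye": 120, "mouth": 300}, face_count=3000)
```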
If the pixel ratio of any part is greater than a preset maximum ratio or less than a preset minimum ratio, the number of pixel points in that part's set is adjusted, yielding an adjusted pixel point set, and the first adjusted image is obtained from the adjusted sets. The image formed by an organ and the face is most attractive when each part falls within a certain golden-proportion interval relative to the face, so a part whose pixel ratio is above the preset maximum ratio or below the preset minimum ratio, i.e., a part that is too large or too small, has its pixel point set shrunk or enlarged so that the fused user image and cultural tourism image look better.
In a possible embodiment, if the pixel ratio of any part is determined to be greater than the preset maximum ratio or less than the preset minimum ratio, adjusting the number of pixel points in the corresponding set to obtain an adjusted pixel point set and obtaining the first adjusted image from it includes:
If the pixel ratio of any part is greater than the preset maximum ratio, a pixel point adjustment coefficient is calculated from the part's pixel ratio and the preset maximum ratio. In this case the part is too large relative to the face; the further the pixel ratio exceeds the preset maximum ratio, the larger the pixel point adjustment coefficient.
The pixel point adjustment number for deletion is calculated from the pixel point adjustment coefficient and the number of pixel points in the part's set, and edge pixel points are deleted from the set according to that number, yielding the deletion-adjusted pixel point set. The larger the adjustment coefficient, the larger the adjustment number and the more edge pixel points are deleted.
In a possible embodiment, calculating the deletion-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the set of the corresponding part, and deleting edge pixel points from the set according to that number to obtain the deletion-adjusted pixel point set, includes:
The pixel point adjustment number for deletion is calculated by a formula of the following form:

\[ m_i = w_i \left( \frac{n_i}{N} - P_{\max} \right) n_i \]

wherein \( m_i \) is the deletion-adjusted pixel point adjustment number of the \( i \)-th corresponding part, \( n_i \) is the second number, i.e., the number of pixel points in the pixel point set of the \( i \)-th corresponding part, \( N \) is the first number, \( w_i \) is the weight value of the \( i \)-th corresponding part, and \( P_{\max} \) is the preset maximum ratio. The quantity \( n_i / N - P_{\max} \) acts as the pixel point adjustment coefficient: the further the pixel ratio \( n_i / N \) of a part exceeds \( P_{\max} \), the larger the adjustment number \( m_i \) of pixel points to delete.
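A sketch of how the deletion-adjusted pixel point adjustment number can be computed from the pixel ratio, weight, and preset maximum ratio; the exact functional form, rounding choice, and names are assumptions:

```python
def deletion_adjust_count(n_i, n_face, weight, p_max):
    """Number of pixel points to delete for a part whose ratio exceeds p_max."""
    coeff = n_i / n_face - p_max  # pixel point adjustment coefficient
    return max(0, round(weight * coeff * n_i))

# Part has 600 of 3000 face pixels (ratio 0.2) against a maximum ratio of 0.15.
m = deletion_adjust_count(n_i=600, n_face=3000, weight=1.0, p_max=0.15)
```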
Edge pixel points in the set are determined from the positional relationship of all pixel points in the set. If the number of edge pixel points is greater than or equal to the pixel point adjustment number, the adjustment number is relatively small, so it suffices to delete all edge pixel points once, yielding the deletion-adjusted pixel point set. In this way, parts requiring only a small adjustment are handled in a single pass.
If the number of edge pixel points is less than the pixel point adjustment number, more than the outermost layer of edge pixel points and their adjacent pixel points must be deleted, so the ratio of the pixel point adjustment number to the number of edge pixel points is calculated and rounded, yielding a first deletion layer number.
All edge pixel points in the set are then deleted, the edge pixel points of the adjusted set are determined again, and this is repeated until the number of times edge pixel points have been determined equals the first deletion layer number. The set thus sheds its outer edge layers one at a time over several passes, progressively reducing the number of pixel points in the set of the corresponding part.
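The layered edge deletion described above can be sketched on a coordinate-set representation, using 4-neighbourhood adjacency to decide which pixels are edges (names are assumptions):

```python
def edge_pixels(pixel_set):
    """A pixel is an edge pixel if any 4-neighbour lies outside the set."""
    edges = set()
    for (r, c) in pixel_set:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if (r + dr, c + dc) not in pixel_set:
                edges.add((r, c))
                break
    return edges

def delete_layers(pixel_set, layers):
    """Delete the outermost edge layer `layers` times (deletion adjustment)."""
    remaining = set(pixel_set)
    for _ in range(layers):
        remaining -= edge_pixels(remaining)
    return remaining

# On a 3x3 block, one deletion layer leaves only the centre pixel.
block = {(r, c) for r in range(3) for c in range(3)}
core = delete_layers(block, layers=1)
```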
If the pixel ratio of any part is less than the preset minimum ratio, a pixel point adjustment coefficient is calculated from the part's pixel ratio and the preset minimum ratio. In this case the part is too small relative to the face; the further the pixel ratio falls below the preset minimum ratio, the larger the pixel point adjustment coefficient.
The pixel point adjustment number for enlargement is calculated from the pixel point adjustment coefficient and the number of pixel points in the part's set. Edge pixel points in the set are then located according to that number, and the other pixel points adjacent to those edge pixel points, which can be regarded as face pixel points, are reclassified into the set, yielding the enlargement-adjusted pixel point set. In this way the pixel points in the set increase and the part grows larger relative to the face.
In a possible embodiment of the technical solution provided by the present invention, calculating the enlargement-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the set of the corresponding part, locking edge pixel points in the set according to that number, and reclassifying the other pixel points adjacent to those edge pixel points into the set to obtain the enlargement-adjusted pixel point set includes:
The pixel point adjustment number for enlargement is calculated by a formula of the following form:

\[ m_j = w_j \left( P_{\min} - \frac{n_j}{N} \right) n_j \]

wherein \( m_j \) is the enlargement-adjusted pixel point adjustment number of the \( j \)-th corresponding part, \( n_j \) is the second number, i.e., the number of pixel points in the pixel point set of the \( j \)-th corresponding part, \( N \) is the first number, \( w_j \) is the weight value of the \( j \)-th corresponding part, and \( P_{\min} \) is the preset minimum ratio. The quantity \( P_{\min} - n_j / N \) acts as the pixel point adjustment coefficient: the further the pixel ratio \( n_j / N \) of a part falls below \( P_{\min} \), the larger the adjustment number \( m_j \) of pixel points to add.
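Analogously to the deletion case, the enlargement-adjusted pixel point adjustment number can be sketched as follows; the exact functional form, rounding choice, and names are assumptions:

```python
def enlargement_adjust_count(n_j, n_face, weight, p_min):
    """Number of pixel points to add for a part whose ratio falls below p_min."""
    coeff = p_min - n_j / n_face  # pixel point adjustment coefficient
    return max(0, round(weight * coeff * n_j))

# Part has 200 of 3000 face pixels (ratio ~0.067) against a minimum ratio of 0.1.
m = enlargement_adjust_count(n_j=200, n_face=3000, weight=1.0, p_min=0.1)
```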
Edge pixel points in the set are determined from the positional relationship of all pixel points in the set, and every face pixel point adjacent to an edge pixel point is marked as a pixel point to be added. In this way, face pixel points bordering the corresponding part are converted into that part, giving the part a reasonable, visually coordinated proportion relative to the face.
The pixel points to be added are counted to obtain a first added number. If the first added number is greater than or equal to the pixel point adjustment number, the required enlargement of the part is small, and it suffices to add the pixel points to be added once; all of them are reclassified into the pixel point set.
If the first added number is less than the pixel point adjustment number, the ratio of the pixel point adjustment number to the number of pixel points to be added is calculated and rounded, yielding a first added layer number;
all pixel points to be added are then reclassified into the set, and the edge pixel points and pixel points to be added of the adjusted set are determined again, repeating until the number of times edge pixel points have been determined equals the first added layer number. In this scenario the required enlargement of the part is large, so the pixel points to be added must be added in several passes; after each pass the edge pixel points of the set change, so different edge pixel points and pixel points to be added are determined each time. It should be noted that the larger the first added layer number, the more the extent of the corresponding part grows.
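The multi-pass enlargement can be sketched on a coordinate-set representation, where absorbing every 4-neighbour outside the set plays the role of reclassifying the adjacent face pixel points (names are assumptions):

```python
def grow_layers(pixel_set, layers):
    """Add the 4-neighbours of edge pixels `layers` times (enlargement adjustment)."""
    grown = set(pixel_set)
    for _ in range(layers):
        to_add = set()
        for (r, c) in grown:
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (r + dr, c + dc) not in grown:
                    to_add.add((r + dr, c + dc))  # adjacent pixel to absorb
        grown |= to_add
    return grown

# One growth layer around a single pixel adds its four neighbours.
grown = grow_layers({(0, 0)}, layers=1)
```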
In a possible implementation manner, the technical solution provided by the present invention determines edge pixel points through the following steps:
The pixel values of the pixel points adjacent to each pixel point in the set are obtained; any pixel point that touches pixel points of another part is determined to be an edge pixel point. In this way the edge pixel points of each part's pixel point set can be determined quickly.
Step S140, obtaining a first pixel value of the pixel points in the first cultural tourism image, and adjusting, according to the first pixel value, the pixel values of the black pixel points of different gray levels in the first adjusted image, obtaining a second adjusted image. Note that the black pixel points here are not purely black but carry a certain gray level; for example, pixel points with a gray level less than 200 may be treated as black pixel points in the present invention. Adjusting their pixel values according to their gray levels yields the second adjusted image.
In one possible implementation manner of the technical solution provided by the present invention, step S140 includes:
A second pixel value interval corresponding to the first pixel value is selected. In the present invention, a corresponding second pixel value interval is set for each first pixel value; for example, if the first pixel value is red, the second pixel value interval may correspond to green, and the second pixel values in the interval may be dark green, green, light green, and so on, each corresponding to a different second pixel value.
A plurality of second sub-pixel values, one per gray level, are determined within the second pixel value interval according to the pixel values of the black pixel points of different gray levels in the first adjusted image. For example, a gray level of 0 (darkest) may map to dark green, a gray level of 50 to green, and a gray level of 100 to light green. In this way the parts retain a certain color difference, making the face and its parts easy to distinguish, and cultural tourism images with different background colors can be paired with face images of matching colors, so that fusing the two does not produce color clashes or hard-to-distinguish regions.
The pixel values of the black pixel points of different gray levels in the first adjusted image are replaced by the corresponding second sub-pixel values, yielding the second adjusted image, in which the colors of the face correspond to the colors of the cultural tourism image.
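The gray-level-to-color replacement can be sketched as follows; the green shades and the white fallback for non-black pixels are illustrative assumptions:

```python
def recolor_by_gray(image, gray_to_color):
    """Replace each black pixel's gray level with its second sub-pixel value (RGB)."""
    return [[gray_to_color.get(g, (255, 255, 255)) for g in row] for row in image]

# Assumed mapping: gray levels per part -> shades of green (second sub-pixel values).
gray_to_color = {0: (0, 100, 0), 50: (0, 170, 0), 100: (140, 220, 140)}
recolored = recolor_by_gray([[0, 255], [50, 100]], gray_to_color)
```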
Step S150, determining the reserved image size of the image reserved position in the first cultural tourism image, adjusting the size of the second adjusted image accordingly, and fusing the second adjusted image with the first cultural tourism image to obtain a second cultural tourism image. In an actual application scenario, the reserved image sizes and the number of reserved positions may differ between cultural tourism images, while the size of the second adjusted image is relatively fixed, so the second adjusted image is resized to match the reserved image size before the two images are fused.
In one possible implementation manner of the technical solution provided by the present invention, step S150 includes:
The reserved image size is compared with the size of the second adjusted image to obtain a first adjustment ratio. For example, if the reserved image size is 10 inches and the second adjusted image is 20 inches, the first adjustment ratio is one half, i.e., the second adjusted image must be reduced by half so that its size corresponds to the reserved image size.
The size of the second adjusted image is adjusted based on the first adjustment ratio, e.g., the 20-inch second adjusted image is reduced to 10 inches, and the central pixel point of the adjusted second adjusted image is determined as a first reference point.
The central pixel point of the image reserved position is determined as a second reference point, and the first reference point is overlaid on the second reference point, aligning the second adjusted image with the reserved position so that it can be effectively fused with the first cultural tourism image, yielding the second cultural tourism image finally used for display.
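The centre-aligned fusion of the resized second adjusted image into the reserved position can be sketched with NumPy; the sizes and pixel values are illustrative:

```python
import numpy as np

def fuse_at_center(base, patch, center):
    """Paste `patch` onto `base` so the patch's centre lands on `center`."""
    fused = base.copy()
    ph, pw = patch.shape
    r0 = center[0] - ph // 2  # top-left row after aligning the two reference points
    c0 = center[1] - pw // 2
    fused[r0:r0 + ph, c0:c0 + pw] = patch
    return fused

base = np.zeros((5, 5), dtype=np.uint8)      # first cultural tourism image
patch = np.full((3, 3), 9, dtype=np.uint8)   # resized second adjusted image
fused = fuse_at_center(base, patch, center=(2, 2))
```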
Step S160, the first projection lamp performs illumination display at the first position according to the second cultural tourism image. After the second cultural tourism image is obtained, it is displayed by illumination, so that passers-by can see their own image fused with the corresponding cultural tourism image, improving the interactivity between tourists and the corresponding theme park.
In order to implement the smart cloud service illumination display method suitable for cultural tourism provided by the present invention, the present invention further provides a smart cloud service illumination display system suitable for cultural tourism, as shown in fig. 4, comprising:
a determining module, used for enabling the cloud server to determine a first projection lamp arranged at a first position, determine, according to the position number of the first position, the first cultural tourism image set configured for the first projection lamp, and randomly select one image from the set as the first cultural tourism image;
an acquisition module, used for determining a first image acquisition device arranged at the first position, acquiring the first captured image set collected by that device within a preset time period, and randomly selecting from it a preset number of first captured images corresponding to the first cultural tourism image;
an identification module, used for recognizing the face contour in the first captured image to obtain a face sub-image, binarizing the face sub-image to obtain a first binarized image containing only white and black pixel points, and asynchronously processing the gray levels of the black pixel points according to the facial part each belongs to, obtaining a first adjusted image;
an adjusting module, used for obtaining a first pixel value of the pixel points in the first cultural tourism image and adjusting, according to the first pixel value, the pixel values of the black pixel points of different gray levels in the first adjusted image, obtaining a second adjusted image;
a fusion module, used for determining the reserved image size of the image reserved position in the first cultural tourism image, adjusting the size of the second adjusted image, and fusing it with the first cultural tourism image to obtain a second cultural tourism image; and
a display module, used for enabling the first projection lamp to perform illumination display at the first position according to the second cultural tourism image.
The present invention also provides a storage medium having a computer program stored therein, the computer program being executable by a processor to implement the methods provided by the various embodiments described above.
The storage medium may be a computer storage medium or a communication medium. Communication media include any medium that facilitates transfer of a computer program from one place to another. Computer storage media may be any available media that can be accessed by a general-purpose or special-purpose computer. For example, a storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an application-specific integrated circuit (ASIC). Additionally, the ASIC may reside in user equipment. Of course, the processor and the storage medium may also reside as discrete components in a communication device. The storage medium may be a read-only memory (ROM), a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The present invention also provides a program product comprising execution instructions stored in a storage medium. The at least one processor of the device may read the execution instructions from the storage medium, and the execution of the execution instructions by the at least one processor causes the device to implement the methods provided by the various embodiments described above.
In the above embodiments of the terminal or the server, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be embodied directly in a hardware processor, or in a combination of hardware and software modules within the processor.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. An intelligent cloud service illumination display method suitable for text travel, characterized by comprising the following steps:
a cloud server determines a first projection lamp arranged at a first position, determines a first text-travel image set configured for the first projection lamp according to the position serial number of the first position, and randomly determines one text-travel image in the first text-travel image set as a first text-travel image;
determining a first image acquisition device arranged at the first position, acquiring a first collected image set collected by the first image acquisition device within a preset time period, and randomly determining, in the first collected image set, a preset number of first collected images corresponding to the first text-travel image;
identifying a face contour in the first collected image to obtain a face sub-image, performing binarization processing on the face sub-image to obtain a first binarized image having only white pixel points and black pixel points, and performing asynchronous processing on the gray levels of the black pixel points according to the face part corresponding to each black pixel point to obtain a first adjusted image;
acquiring a first pixel value of a pixel point in the first text-travel image, and adjusting the pixel values of black pixel points of different gray levels in the first adjusted image according to the first pixel value to obtain a second adjusted image;
determining a reserved image size of an image reserved position in the first text-travel image, adjusting the size of the second adjusted image, and fusing the second adjusted image with the first text-travel image to obtain a second text-travel image;
the first projection lamp performs illumination display at the first position according to the second text-travel image.
2. The intelligent cloud service illumination display method suitable for text travel according to claim 1, wherein
the step in which the cloud server determines the first projection lamp arranged at the first position, determines the first text-travel image set configured for the first projection lamp according to the position serial number of the first position, and randomly determines one text-travel image in the first text-travel image set as the first text-travel image comprises:
after the first projection lamp judges that the display time of the currently displayed text-travel image is greater than or equal to a first preset time, generating an image display request, and sending, by the first projection lamp, the image display request and the first position where it is located to the cloud server;
after receiving the image display request, the cloud server determines the first projection lamp arranged at the first position, and determines the first text-travel image set configured for the first projection lamp according to the position serial number of the first position, wherein each position serial number is provided with a corresponding first text-travel image set;
deleting the currently displayed text-travel image from the first text-travel image set, and randomly selecting one text-travel image from the first text-travel image set as the first text-travel image.
3. The intelligent cloud service illumination display method suitable for text travel according to claim 2, wherein
the step of identifying the face contour in the first collected image to obtain a face sub-image, performing binarization processing on the face sub-image to obtain a first binarized image having only white pixel points and black pixel points, and performing asynchronous processing on the gray levels of the black pixel points according to the face part corresponding to each black pixel point to obtain a first adjusted image comprises:
identifying the face contour in the first collected image based on OpenCV to obtain a face sub-image, acquiring the pixel value of each pixel point in the face sub-image, converting the pixel points within a preset pixel interval into black pixel points, and converting the pixel points outside the preset pixel interval into white pixel points to obtain the first binarized image;
detecting each part in the face sub-image based on a Haar cascade detector, wherein the parts comprise any one or more of the face, eyes, nose, mouth and ears, and determining the pixel points corresponding to each part in the face sub-image to obtain a pixel point set corresponding to each part;
asynchronously adjusting the gray levels of the corresponding pixel points according to the part corresponding to each pixel point set to obtain the adjusted pixel values of each pixel point set, and taking the image formed by the adjusted pixel values of all the pixel point sets as the first adjusted image.
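The binarization step of claim 3 amounts to thresholding each pixel against a preset interval; a minimal Python sketch follows. The interval (0, 128) and the 2×2 sub-image are assumed example values, and in practice the face and part detection would use OpenCV's `cv2.CascadeClassifier` with Haar cascade files, which is omitted here to keep the sketch self-contained.

```python
def binarize(gray, interval=(0, 128)):
    """Claim 3's binarization: pixels whose value lies inside the preset
    pixel interval become black (0); all others become white (255)."""
    lo, hi = interval
    return [[0 if lo <= v <= hi else 255 for v in row] for row in gray]

# Assumed 2x2 grayscale face sub-image.
gray = [[10, 200], [120, 130]]
binary = binarize(gray)  # only black and white pixel values remain
```

The result contains only the two pixel values 0 and 255, matching the "only white pixel points and black pixel points" condition of the claim.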
4. The intelligent cloud service illumination display method suitable for text travel according to claim 3, wherein
the step of asynchronously adjusting the gray levels of the corresponding pixel points according to the part corresponding to each pixel point set to obtain the adjusted pixel values of each pixel point set, and taking the image formed by the adjusted pixel values of all the pixel point sets as the first adjusted image comprises:
determining the gray coefficient value of the part corresponding to each pixel point set, and adjusting the gray levels of the corresponding pixel points according to the gray coefficient value to obtain the adjusted pixel values of each pixel point set;
taking the number of pixel points in the pixel point set corresponding to the face as a first number, taking the number of pixel points in the pixel point set corresponding to each part other than the face as a second number, and calculating the pixel ratio of each part from the second number and the first number;
if the pixel ratio of any part is judged to be greater than a preset maximum ratio or smaller than a preset minimum ratio, adjusting the number of pixel points in the pixel point set corresponding to that part to obtain an adjusted pixel point set, and obtaining the first adjusted image according to the adjusted pixel point set.
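The ratio check of claim 4 can be sketched as below. The part counts, the face count, and the ratio band [0.02, 0.25] are assumed example values, not figures taken from the patent.

```python
def pixel_ratios(part_counts, face_count):
    """Second number (pixel count of each non-face part) divided by the
    first number (pixel count of the face), as described in claim 4."""
    return {part: n / face_count for part, n in part_counts.items()}

def needs_adjustment(ratio, r_min=0.02, r_max=0.25):
    """A part's pixel point set is adjusted when its ratio leaves the
    preset band [r_min, r_max] (band limits are assumed examples)."""
    return ratio > r_max or ratio < r_min

counts = {"eyes": 300, "nose": 150, "mouth": 2600}  # assumed example counts
ratios = pixel_ratios(counts, face_count=10000)
flags = {part: needs_adjustment(r) for part, r in ratios.items()}
```

Here the nose falls below the minimum and the mouth exceeds the maximum, so both would be passed to the deletion or addition adjustment of claim 5.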
5. The intelligent cloud service illumination display method suitable for text travel according to claim 4, wherein
the step of adjusting the number of pixel points in the pixel point set corresponding to that part to obtain an adjusted pixel point set, and obtaining the first adjusted image according to the adjusted pixel point set, when the pixel ratio of any part is judged to be greater than the preset maximum ratio or smaller than the preset minimum ratio, comprises:
if the pixel ratio of any part is judged to be greater than the preset maximum ratio, calculating a pixel point adjustment coefficient from the pixel ratio of that part and the preset maximum ratio;
calculating a deletion-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to that part, and deleting part of the edge pixel points in the pixel point set according to the pixel point adjustment number to obtain a deletion-adjusted pixel point set;
if the pixel ratio of any part is judged to be smaller than the preset minimum ratio, calculating a pixel point adjustment coefficient from the pixel ratio of that part and the preset minimum ratio;
calculating an addition-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to that part, locking part of the edge pixel points in the pixel point set according to the pixel point adjustment number, and classifying other pixel points adjacent to the edge pixel points into the pixel point set to obtain an addition-adjusted pixel point set.
6. The intelligent cloud service illumination display method suitable for text travel according to claim 5, wherein
the step of calculating the deletion-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to that part, and deleting part of the edge pixel points in the pixel point set according to the pixel point adjustment number to obtain the deletion-adjusted pixel point set comprises:
the adjustment number of the pixel points to reduce the adjustment is calculated by the following formula,
Figure DEST_PATH_IMAGE001
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE002
in order to reduce the adjustment coefficient of the adjusted pixel points,
Figure DEST_PATH_IMAGE003
is as follows
Figure DEST_PATH_IMAGE004
The adjustment quantity of the pixel points of the increased adjustment of each corresponding part,
Figure DEST_PATH_IMAGE005
is as follows
Figure 810637DEST_PATH_IMAGE004
A second number of pixels in the set of pixels for the corresponding location,
Figure DEST_PATH_IMAGE006
in the form of a first number of bits,
Figure DEST_PATH_IMAGE007
is composed of
Figure 582415DEST_PATH_IMAGE004
The weight value of the corresponding position is set,
Figure DEST_PATH_IMAGE008
is a preset maximum ratio;
determining the edge pixel points in the pixel point set according to the positional relationship of the pixel points in the pixel point set, and if the number of edge pixel points is judged to be greater than or equal to the pixel point adjustment number, deleting all the edge pixel points in the pixel point set to obtain the deletion-adjusted pixel point set;
if the number of edge pixel points is judged to be smaller than the pixel point adjustment number, determining the ratio of the pixel point adjustment number to the number of edge pixel points to obtain a first deletion layer number, and rounding the first deletion layer number;
deleting all the edge pixel points in the pixel point set to obtain a deletion-adjusted pixel point set, and re-determining the edge pixel points in the adjusted pixel point set, so that the number of times the edge pixel points are determined in the pixel point set matches the first deletion layer number.
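The layer-wise edge deletion of claim 6 behaves like repeated morphological erosion: determine the current edge, delete it, then re-determine the edge on the shrunken set until the adjustment number is reached. A minimal sketch on a set of (row, column) coordinates follows; the 4×4 part and the adjustment number 12 are assumed example values.

```python
def edge_pixels(pixel_set):
    """Pixels of the set that touch a coordinate outside the set
    (4-neighbourhood): the edge layer described in the claims."""
    edges = set()
    for (r, c) in pixel_set:
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) not in pixel_set:
                edges.add((r, c))
                break
    return edges

def delete_layers(pixel_set, n_delete):
    """Peel whole edge layers until at least n_delete pixels are removed,
    re-determining the edge each round (the 'deletion layer' behaviour)."""
    remaining = set(pixel_set)
    removed = 0
    while removed < n_delete and remaining:
        layer = edge_pixels(remaining)
        remaining -= layer
        removed += len(layer)
    return remaining

# Hypothetical 4x4 part: its outer ring (12 pixels) is one edge layer.
block = {(r, c) for r in range(4) for c in range(4)}
core = delete_layers(block, n_delete=12)
```

One layer of the 4×4 block is exactly 12 pixels, so a single pass suffices here; a larger adjustment number would trigger a second edge determination, as the claim describes.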
7. The intelligent cloud service illumination display method suitable for text travel according to claim 5, wherein
the step of calculating the addition-adjusted pixel point adjustment number from the pixel point adjustment coefficient and the number of pixel points in the pixel point set corresponding to that part, locking part of the edge pixel points in the pixel point set according to the pixel point adjustment number, and classifying other pixel points adjacent to the edge pixel points into the pixel point set to obtain the addition-adjusted pixel point set comprises:
the adjustment number of the pixel points for increasing the adjustment is calculated by the following formula,
Figure DEST_PATH_IMAGE009
wherein the content of the first and second substances,
Figure DEST_PATH_IMAGE010
in order to increase the number of pixel point adjustments to be adjusted,
Figure DEST_PATH_IMAGE011
is as follows
Figure DEST_PATH_IMAGE012
The number of pixel points of each corresponding part which are reduced and adjusted,
Figure 456961DEST_PATH_IMAGE011
is as follows
Figure 272470DEST_PATH_IMAGE012
A second number of pixels in the set of pixels for the corresponding location,
Figure 419418DEST_PATH_IMAGE006
in the form of a first number of bits,
Figure DEST_PATH_IMAGE013
is composed of
Figure 382170DEST_PATH_IMAGE012
The weight value of the corresponding position is set,
Figure DEST_PATH_IMAGE014
is a preset minimum occupation ratio;
determining the edge pixel points in the pixel point set according to the positional relationship of the pixel points in the pixel point set, and determining all the face pixel points adjacent to each edge pixel point as pixel points to be added;
counting all the pixel points to be added to obtain a first added number, and if the first added number is greater than or equal to the pixel point adjustment number, classifying all the pixel points to be added into the pixel point set;
if the first added number is smaller than the pixel point adjustment number, determining the ratio of the pixel point adjustment number to the number of pixel points to be added to obtain a first addition layer number, and rounding the first addition layer number;
classifying all the pixel points to be added into the pixel point set to obtain an addition-adjusted pixel point set, and re-determining the edge pixel points and the pixel points to be added in the adjusted pixel point set, so that the number of times the edge pixel points are determined in the pixel point set matches the first addition layer number.
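Symmetrically, the layer-wise growth of claim 7 resembles morphological dilation. The sketch below grows into all adjacent coordinates; in the claim only adjacent face pixel points qualify as pixels to be added, a constraint omitted here for brevity (i.e. all neighbours are assumed to be face pixels). The 2×2 part and the adjustment number 8 are assumed example values.

```python
def boundary_neighbors(pixel_set):
    """Coordinates adjacent (4-neighbourhood) to the set but outside it:
    the 'pixel points to be added' of claim 7, under the simplifying
    assumption that every neighbour is a face pixel."""
    grow = set()
    for (r, c) in pixel_set:
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (nr, nc) not in pixel_set:
                grow.add((nr, nc))
    return grow

def add_layers(pixel_set, n_add):
    """Absorb whole neighbour layers until at least n_add pixels are
    added, re-determining the boundary each round (the 'addition layer'
    behaviour)."""
    grown = set(pixel_set)
    added = 0
    while added < n_add:
        layer = boundary_neighbors(grown)
        grown |= layer
        added += len(layer)
    return grown

# Hypothetical 2x2 part grown until at least 8 pixels have been added.
part = {(0, 0), (0, 1), (1, 0), (1, 1)}
grown = add_layers(part, n_add=8)
```

The first neighbour layer of a 2×2 block contains exactly 8 coordinates, so one round meets the assumed adjustment number.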
8. The intelligent cloud service illumination display method suitable for text travel according to claim 6 or 7, wherein
the edge pixel points are determined by the following steps:
acquiring the pixel values of the pixel points adjacent to each pixel point in the pixel point set, and if a pixel point is in contact with pixel points of another part, determining that pixel point as an edge pixel point;
and the step of acquiring the first pixel value of a pixel point in the first text-travel image and adjusting the pixel values of black pixel points of different gray levels in the first adjusted image according to the first pixel value to obtain the second adjusted image comprises:
selecting a second pixel value interval corresponding to the first pixel value;
determining, according to the pixel values of the black pixel points of different gray levels in the first adjusted image, a plurality of second sub-pixel values respectively corresponding to the second pixel value interval;
replacing the pixel values of the black pixel points of different gray levels in the first adjusted image with the corresponding second sub-pixel values to obtain the second adjusted image.
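The replacement step of claim 8 maps each gray level present among the black pixel points to a second sub-pixel value drawn from the second pixel value interval. A minimal sketch follows; the mapping values are assumed examples, not values derived from a real first pixel value.

```python
def replace_by_gray_level(adjusted, level_to_value):
    """Replace each pixel's value according to its gray level.
    `level_to_value` stands in for the second sub-pixel values of the
    second pixel value interval; unmapped values (e.g. white, 255) are
    left unchanged."""
    return [[level_to_value.get(v, v) for v in row] for row in adjusted]

# Hypothetical gray levels 0/64/128 mapped into a second pixel value interval.
mapping = {0: 30, 64: 60, 128: 90}
first_adjusted = [[0, 64], [128, 255]]
second_adjusted = replace_by_gray_level(first_adjusted, mapping)
```

Because the asynchronous adjustment of claim 3 gave each face part its own gray level, this per-level replacement effectively recolours each part independently.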
9. The intelligent cloud service illumination display method suitable for text travel according to claim 8, wherein
the step of determining the reserved image size of the image reserved position in the first text-travel image, adjusting the size of the second adjusted image, and then fusing the second adjusted image with the first text-travel image to obtain the second text-travel image comprises:
comparing the reserved image size with the size of the second adjusted image to obtain a first adjustment ratio;
adjusting the size of the second adjusted image based on the first adjustment ratio, and determining the central pixel point of the adjusted second adjusted image as a first reference point;
determining the central pixel point of the image reserved position as a second reference point, and making the first reference point coincide with the second reference point, so that the second adjusted image is fused with the first text-travel image to obtain the second text-travel image.
10. An intelligent cloud service illumination display system suitable for text travel, characterized by comprising:
a determining module, used for enabling a cloud server to determine a first projection lamp arranged at a first position, determine a first text-travel image set configured for the first projection lamp according to the position serial number of the first position, and randomly determine one text-travel image in the first text-travel image set as a first text-travel image;
an acquisition module, used for determining a first image acquisition device arranged at the first position, acquiring a first collected image set collected by the first image acquisition device within a preset time period, and randomly determining, in the first collected image set, a preset number of first collected images corresponding to the first text-travel image;
an identification module, used for identifying a face contour in the first collected image to obtain a face sub-image, performing binarization processing on the face sub-image to obtain a first binarized image having only white pixel points and black pixel points, and performing asynchronous processing on the gray levels of the black pixel points according to the face part corresponding to each black pixel point to obtain a first adjusted image;
an adjusting module, used for acquiring a first pixel value of a pixel point in the first text-travel image and adjusting the pixel values of black pixel points of different gray levels in the first adjusted image according to the first pixel value to obtain a second adjusted image;
a fusion module, used for determining the reserved image size of the image reserved position in the first text-travel image, adjusting the size of the second adjusted image, and fusing the second adjusted image with the first text-travel image to obtain a second text-travel image;
and a display module, used for enabling the first projection lamp to perform illumination display at the first position according to the second text-travel image.
CN202211270389.2A 2022-10-18 2022-10-18 Smart cloud service lighting display method and system suitable for text travel Active CN115348709B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211270389.2A CN115348709B (en) 2022-10-18 2022-10-18 Smart cloud service lighting display method and system suitable for text travel


Publications (2)

Publication Number Publication Date
CN115348709A true CN115348709A (en) 2022-11-15
CN115348709B CN115348709B (en) 2023-03-28

Family

ID=83957327

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211270389.2A Active CN115348709B (en) 2022-10-18 2022-10-18 Smart cloud service lighting display method and system suitable for text travel

Country Status (1)

Country Link
CN (1) CN115348709B (en)

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120268436A1 (en) * 2011-04-20 2012-10-25 Yao-Tsung Chang Display device and method for adjusting gray-level of image frame depending on environment illumination
CN104866843A (en) * 2015-06-05 2015-08-26 中国人民解放军国防科学技术大学 Monitoring-video-oriented masked face detection method
CN107578380A (en) * 2017-08-07 2018-01-12 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN109166158A (en) * 2018-08-24 2019-01-08 中国电建集团华东勘测设计研究院有限公司 A kind of forest land canopy density determine method, apparatus and system
CN109191410A (en) * 2018-08-06 2019-01-11 腾讯科技(深圳)有限公司 A kind of facial image fusion method, device and storage medium
CN109672847A (en) * 2017-10-13 2019-04-23 李玉卓 Intelligent safety defense monitoring system based on image recognition technology
CN112488085A (en) * 2020-12-28 2021-03-12 深圳市慧鲤科技有限公司 Face fusion method, device, equipment and storage medium
CN112950661A (en) * 2021-03-23 2021-06-11 大连民族大学 Method for generating antithetical network human face cartoon based on attention generation
CN113989890A (en) * 2021-10-29 2022-01-28 河南科技大学 Face expression recognition method based on multi-channel fusion and lightweight neural network
CN114387441A (en) * 2021-12-11 2022-04-22 中国电信集团系统集成有限责任公司河北分公司 Image processing method and system
CN114565627A (en) * 2022-03-01 2022-05-31 杭州爱科科技股份有限公司 Contour extraction method, device, equipment and storage medium
CN114973368A (en) * 2022-05-27 2022-08-30 平安普惠企业管理有限公司 Face recognition method, device, equipment and storage medium based on feature fusion
WO2022179215A1 (en) * 2021-02-23 2022-09-01 北京市商汤科技开发有限公司 Image processing method and apparatus, electronic device, and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
YIFANG LI et al., "Fourier-Domain Ultrasonic Imaging of Cortical Bone Based on Velocity Distribution Inversion", IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control *
FENG Jiqiang et al., "5G + Smart Culture-Tourism: A New Model for the Integrated Development of Libraries, Culture and Tourism", Library & Information *
QIU Jialiang et al., "Fast facial image beautification combining skin-color segmentation and smoothing", Journal of Image and Graphics *
CHEN Jian et al., "A face detection method for color images based on the Haar wavelet transform", Microcomputer Information *


Similar Documents

Publication Publication Date Title
WO2021088300A1 (en) Rgb-d multi-mode fusion personnel detection method based on asymmetric double-stream network
KR101420681B1 (en) Method and apparatus for generating the depth map of video image
TWI362016B (en) Method for detecting desired objects in a highly dynamic environment by a monitoring system and the monitoring system thereof
US10467800B2 (en) Method and apparatus for reconstructing scene, terminal device, and storage medium
US20140085501A1 (en) Video processing systems and methods
US20020076100A1 (en) Image processing method for detecting human figures in a digital image
JP2019009752A (en) Image processing device
CN109740444B (en) People flow information display method and related product
CN102609724B (en) Method for prompting ambient environment information by using two cameras
WO2012022744A2 (en) Multi-mode video event indexing
WO2009078957A1 (en) Systems and methods for rule-based segmentation for objects with full or partial frontal view in color images
JP2001202525A (en) Method for deciding direction of image including blue sky
US20030025810A1 (en) Displaying digital images
CN111667400A (en) Human face contour feature stylization generation method based on unsupervised learning
CN113111838A (en) Behavior recognition method and device, equipment and storage medium
CN109876416A (en) A kind of rope skipping method of counting based on image information
CN110009650A (en) A kind of escalator handrail borderline region crosses the border detection method and system
CN109325926B (en) Automatic filter implementation method, storage medium, device and system
CN115348709B (en) Smart cloud service lighting display method and system suitable for text travel
CN113724527A (en) Parking space management method
KR101600617B1 (en) Method for detecting human in image frame
Solina et al. 15 seconds of fame-an interactive, computer-vision based art installation
CN111898448A (en) Pedestrian attribute identification method and system based on deep learning
Odetallah et al. Human visual system-based smoking event detection
CN115147868A (en) Human body detection method of passenger flow camera, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant