CN114666501A - Intelligent control method for camera of wearable device - Google Patents

Intelligent control method for camera of wearable device

Info

Publication number
CN114666501A
CN114666501A (application CN202210263511.7A)
Authority
CN
China
Prior art keywords
data
deviation
image
scene
analysis
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210263511.7A
Other languages
Chinese (zh)
Other versions
CN114666501B (en)
Inventor
贾德双
赵耘
彭可炜
沈永光
胡望鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Boomtech Industrial Co ltd
Original Assignee
Shenzhen Boomtech Industrial Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Boomtech Industrial Co ltd filed Critical Shenzhen Boomtech Industrial Co ltd
Priority to CN202210263511.7A priority Critical patent/CN114666501B/en
Publication of CN114666501A publication Critical patent/CN114666501A/en
Application granted granted Critical
Publication of CN114666501B publication Critical patent/CN114666501B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00Speech recognition
    • G10L15/005Language recognition
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/667Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Studio Devices (AREA)

Abstract

The invention provides an intelligent control method for a camera of a wearable device, comprising the following steps. Step S01: wake an adjustable camera group through a preset control instruction to acquire scene data within a preset range. Step S02: an image processor analyzes the received scene data to determine image deviation. Step S03: calculate the deviation ratio of the adjustable camera group according to the deviation and a preset image standard database, and determine a second control strategy for the adjustable camera group. Step S04: control the adjustable camera group to collect scene data again according to the second control strategy, determine second scene data, judge whether the second scene data has image deviation, and take the second scene data as the target scene data when it has no image deviation.

Description

Intelligent control method for camera of wearable device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an intelligent control method for a camera of wearable equipment.
Background
At present, with the rapid development of 5G, the Internet of Things, intelligent robots and related fields, the demand for intelligent cameras is growing, and so are the requirements placed on them. Intelligent cameras are applied in ever more scenarios, such as environmental safety, face recognition, intelligent scanning and autonomous driving, and must handle increasingly complex conditions during image capture. In other fields the camera is equally indispensable, for example in well surveying and mapping, conference recording and everyday photography; from professional work to daily life, the intelligent camera plays a growing role, and in lightweight daily applications the camera needs to be more portable and its usage scenarios more flexible. For example, the camera calibration method, program and device of application '201810315703.1' perform multi-point testing with different cameras to improve camera accuracy, which places high demands on three-dimensional coordinate data; that method can improve shooting breadth and accuracy, but its acquisition standard is single, so it cannot deliver its full effect across different usage scenarios. In the present invention, a portable camera is woken rapidly through a preset wake-up mode, a control strategy is determined according to the usage scenario, data acquisition is then performed, and the acquired image is encoded and deviation-corrected to improve imaging accuracy and clarity; finally, data acquisition is performed again with the corrected acquisition mode and fitting analysis is carried out, which improves shooting efficiency while safeguarding the shooting process and user safety in the current scene.
Disclosure of Invention
The invention provides an intelligent control method for a camera of a wearable device, which realizes intelligent control of shooting and acquires clear imaging on the basis of portability, allows rapid wake-up through a specific mode in different usage scenarios, and ensures the safety of the shooting data and of the user.
The invention provides an intelligent control method for a camera of wearable equipment, which comprises the following steps:
step S01: awakening the adjustable camera group through a preset control instruction to acquire scene data within a preset range;
step S02: the image processor analyzes images according to the received scene data and determines image deviation;
step S03: calculating the deviation ratio of the adjustable camera group according to the deviation and a preset image standard database, and determining a second control strategy of the adjustable camera group;
step S04: and controlling the adjustable camera group to acquire scene data again according to the second control strategy, determining second scene data, judging whether the second scene data has the image deviation, and taking the second scene data as target scene data when the second scene data has no image deviation.
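Steps S01 to S04 amount to a capture-analyze-correct-recapture loop. The sketch below is only an illustrative simulation of that loop; the camera group model, the deviation measure and the strategy derivation are all assumed stand-ins, not the patent's implementation:

```python
class CameraGroup:
    """Minimal stand-in for the adjustable camera group (illustrative)."""
    def __init__(self, offset):
        self.offset = offset  # simulated misalignment of the group

    def capture(self):
        # S01: acquire scene data within the preset range
        return {"offset": self.offset}

    def apply(self, strategy):
        # apply a correction strategy to the group
        self.offset -= strategy["correction"]

def analyze_image_deviation(scene, tolerance=0.1):
    # S02: report a deviation only when the simulated offset exceeds tolerance
    return scene["offset"] if abs(scene["offset"]) > tolerance else None

def derive_second_strategy(deviation):
    # S03: a trivial "second control strategy" that cancels the measured deviation
    return {"correction": deviation}

def acquire_target_scene(group, max_retries=3):
    scene = group.capture()
    for _ in range(max_retries):
        deviation = analyze_image_deviation(scene)
        if deviation is None:
            return scene                      # S04: no deviation -> target scene data
        group.apply(derive_second_strategy(deviation))
        scene = group.capture()               # S04: re-acquire under the new strategy
    return scene
```

In this toy model a group that starts misaligned converges to the tolerance band after one correction pass.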
As an embodiment of the present invention, the step S01 includes:
setting a control instruction according to the control requirement of a user, and setting a corresponding pre-recognition mode according to the control instruction; wherein:
the control instructions include: a voice control instruction, a fingerprint control instruction and a tap control instruction; wherein:
the voice control instruction comprises: a Mandarin control instruction, a dialect control instruction, a foreign-language control instruction, a long-tone control instruction and a short-tone control instruction;
the pre-recognition mode includes: a monocular identification mode, a binocular identification mode, a rapid shooting mode and a sound tracing mode.
As an embodiment of the present technical solution, the step S01 further includes:
the adjustable camera group is awakened according to the pre-recognition mode to acquire scene data within a preset range; wherein:
awakening the adjustable camera group according to the pre-recognition mode comprises the following steps:
step one: the user performs a control operation on the adjustable camera group to generate a user control data group;
step two: according to the user control data group and the pre-control data group corresponding to the control instruction, carrying out per-category data similarity calculation to generate a control data similarity value group, carrying out awakening judgment and determining an awakening result; wherein:
when every similarity value in the control data similarity value group is within its corresponding preset threshold range, the awakening succeeds;
when one or more similarity values in the control data similarity value group are not within their corresponding preset threshold ranges, the awakening fails;
step three: when the adjustable camera group is awakened successfully, shooting recognition is carried out according to the pre-recognition mode to obtain the scene data.
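The three wake-up steps above can be sketched as follows; the per-category similarity measure is an assumption (the patent does not specify one), and the category names and threshold ranges are invented for illustration:

```python
def try_wake(user_data, preset_data, thresholds):
    """Step-two sketch: compare each category of user control data with the
    preset control data; wake only if every similarity value lies inside its
    preset threshold range. The similarity measure itself is an assumption."""
    similarities = {}
    for key, preset in preset_data.items():
        observed = user_data.get(key, 0.0)
        scale = max(abs(preset), abs(observed), 1e-9)
        # relative-difference similarity in [..., 1.0]
        similarities[key] = 1.0 - abs(preset - observed) / scale
    woke = all(lo <= similarities[k] <= hi for k, (lo, hi) in thresholds.items())
    return woke, similarities
```

A close match in every category (e.g. voice pitch and duration) wakes the group; one out-of-range category fails the wake-up, matching the all-or-nothing judgment above.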
As an embodiment of the present invention, the step S02 includes:
the image processor judges the image type according to the received scene data and determines the image type of the scene data; wherein:
the image types include: video data and image data;
when the image type of the scene data is video data, performing video framing to obtain corresponding frame images, performing image deviation analysis, and determining a deviation result;
when the image type of the scene data is image data, performing image deviation analysis to generate deviation data; wherein:
the image deviation analysis comprises: image definition analysis, image integrity analysis and image exposure analysis.
As an embodiment of the present invention, the image deviation analysis includes:
the camera processor extracts image preprocessing data from a preset image database according to the scene data;
calculating the matching degree of the image data based on the image data and the image preprocessing data, performing deviation judgment, and determining a deviation judgment result; wherein:
when the matching degree of the image data is greater than or equal to a preset matching-degree threshold, the image has no image deviation;
when the matching degree of the image data is smaller than the preset matching-degree threshold, the image has image deviation.
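The threshold rule above can be sketched directly; cosine similarity between feature vectors is an assumed stand-in for the unspecified matching-degree calculation:

```python
import math

def judge_deviation(image_vec, reference_vec, threshold=0.9):
    """Matching-degree check: at or above the threshold the image has no
    deviation, below it the image is deviated. Cosine similarity is an
    assumed placeholder for the patent's matching-degree computation."""
    dot = sum(a * b for a, b in zip(image_vec, reference_vec))
    norm = math.hypot(*image_vec) * math.hypot(*reference_vec)
    matching_degree = dot / norm if norm else 0.0
    has_deviation = matching_degree < threshold
    return has_deviation, matching_degree
```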
As an embodiment of the present technical solution, the step S03 includes:
the camera processor encodes the deviation image corresponding to the received deviation data to generate a coded image; wherein:
the camera encoding comprises the following steps:
step S100: generating deviation division regions by performing linear analysis on the deviation data; wherein:
the deviation division regions include: a linear deviation region and a nonlinear deviation region;
step S200: performing corresponding phase-data modulation analysis according to the deviation division regions to obtain first coded data, performing data classification on the first coded data, and determining the coded data category; wherein:
the coded data categories include: high-frequency coded data and low-frequency coded data;
step S300: establishing a coding matrix according to the first coded data, and generating a coded image.
As an embodiment of the present technical solution, the step S03 further includes:
performing data frequency screening based on the coded image to determine high-frequency data; disassembling the coded image according to the high-frequency data to generate coded subgraphs; generating a decoding phase diagram by integrating the coded subgraphs; and decoding and recovering according to the decoding phase diagram to obtain first decoded data and generate a decoded image;
the integration processing comprises: subgraph translation processing, subgraph rotation processing and subgraph accumulation processing.
As an embodiment of the present technical solution, the step S03 further includes:
performing deviation calculation with the first decoded data and the decoding comparison data in a preset decoding database to generate the deviation ratio of the adjustable camera group, and performing deviation-ratio judgment; wherein:
when the deviation ratio of the adjustable camera group is within a preset threshold range, no strategy correction is needed;
when the deviation ratio of the adjustable camera group is not within the preset threshold range, strategy correction is performed; wherein:
the strategy correction comprises: performing strategy matching with the deviation ratio of the adjustable camera group against a preset deviation comparison table, determining the correction strategy corresponding to the deviation ratio, and adjusting the strategy according to the correction strategy to generate the second control strategy; wherein:
the correction strategies comprise: a rotation adjustment strategy of the adjustable camera group, a displacement adjustment strategy of the adjustable camera group and a lighting adjustment strategy of the adjustable camera group.
As an embodiment of the present technical solution, the step S04 includes:
performing shooting and collection through the second control strategy to obtain second scene data; judging whether the second scene data has image deviation, and determining a judgment result; wherein:
when the second scene data has no image deviation, taking the second scene data as the target scene data;
when the second scene data has image deviation, performing scene comparison to generate a comparison result; wherein:
the scene comparison performs comparative analysis on the second scene data and the original scene data to generate comparison data, and the comparison result is determined according to the comparison data; wherein:
the comparative analysis comprises: anti-shake comparative analysis, safety comparative analysis and iterative analysis;
the comparison results comprise: a forward result, a reverse result and a no-change result;
performing storage judgment according to the comparison result; wherein:
when the comparison result is a forward result, cloud storage is performed;
when the comparison result is a reverse result or a no-change result, local storage is performed.
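The storage judgment above reduces to a small dispatch; the stores are modeled here as plain lists for illustration:

```python
def store_by_comparison(result, scene_data, cloud_store, local_store):
    """Forward comparison result -> cloud storage; reverse or no-change
    result -> local storage, as in the storage judgment above."""
    if result == "forward":
        cloud_store.append(scene_data)
        return "cloud"
    if result in ("reverse", "no_change"):
        local_store.append(scene_data)
        return "local"
    raise ValueError("unknown comparison result: %r" % (result,))
```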
As an embodiment of the present technical solution, the safety contrast analysis includes:
scene safety analysis and data acquisition safety analysis; wherein:
in the scene safety analysis, the adjustable camera group acquires scene data and transmits it to the camera processor, and the camera processor determines the data category by classifying the scene data; wherein:
the data categories include: static data and dynamic data;
performing the corresponding safety analysis according to the data category to generate a safety analysis result; wherein:
the safety analysis includes: placement safety analysis and movement safety analysis;
when the safety analysis result is within a preset range, the scene is a safe scene, and safety feedback is performed; wherein:
the safety feedback can be carried out through a voice preset by the user;
when the safety analysis result is not within the preset range, the scene is an unsafe scene, and early-warning processing is performed;
the data acquisition safety analysis stores and judges the access information of a preset camera protocol; wherein:
when the access information is within the preset safety-protocol information range, cloud storage proceeds normally; otherwise, cloud storage is stopped and local storage is performed.
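The data-acquisition safety rule above can be sketched as a whitelist check; the access identifiers below are illustrative, since the patent leaves the safety-protocol information range unspecified:

```python
SAFE_ACCESS_IDS = {"cam-proto-01", "cam-proto-02"}  # assumed preset protocol range

def choose_storage(access_id, safe_ids=SAFE_ACCESS_IDS):
    """Cloud storage only while the access information lies inside the preset
    safety-protocol range; otherwise stop cloud storage and store locally."""
    return "cloud" if access_id in safe_ids else "local"
```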
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention and not to limit the invention. In the drawings:
fig. 1 is a flowchart of an intelligent control method for a camera of a wearable device according to an embodiment of the present invention;
fig. 2 is a flowchart illustrating waking up in an intelligent camera control method for a wearable device according to an embodiment of the present invention;
fig. 3 is a flowchart of a camera code in an intelligent control method for a camera of a wearable device in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
It will be understood that when an element is referred to as being "secured to" or "disposed on" another element, it can be directly on the other element or be indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element.
It will be understood that the terms "length," "width," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like, as used herein, refer to an orientation or positional relationship indicated in the drawings that is solely for the purpose of facilitating the description and simplifying the description, and do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and is therefore not to be construed as limiting the invention.
Moreover, it is noted that, in this document, relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions, and "a plurality" means two or more unless specifically limited otherwise. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.
According to the invention, the camera group is woken by setting multiple control instructions, such as voice, touch and tapping, so that functions such as quick recording and instant shooting can be realized; voice wake-up frees both hands, making the camera suitable for work scenarios where both hands are occupied. After the camera group is woken, the corresponding recognition mode is entered quickly according to the wake-up mode, and the control strategy is then determined quickly from the recognition information. For the control strategy, the strategy level is judged first and strategy analysis is carried out accordingly; finally, the camera group is controlled to collect data, and the collected data is encoded and deviation-corrected, improving the safety and imaging quality of the acquisition. Different fittings are performed at the end, and a machine-learning training approach improves how well and how efficiently the intelligent camera matches the current scene.
Example 1:
the embodiment of the invention provides an intelligent control method for a camera of wearable equipment, which comprises the following steps:
step S01: awakening the adjustable camera group through a preset control instruction to acquire scene data within a preset range;
step S02: the image processor analyzes images according to the received scene data and determines image deviation;
step S03: calculating the deviation ratio of the adjustable camera group according to the deviation and a preset image standard database, and determining a second control strategy of the adjustable camera group;
step S04: controlling the adjustable camera group to perform scene data acquisition again according to the second control strategy, determining second scene data, judging whether the second scene data has the image deviation, and taking the second scene data as target scene data when the second scene data has no image deviation;
the working principle of the technical scheme is as follows: in the prior art, cameras are usually controlled in a single mode, a fixed number of cameras is set in the device, the targeted use cases are narrow or even absent, and recording data on an ordinary memory card cannot guarantee the accuracy and safety of data transmission. As shown in fig. 1, the invention first wakes the camera group through one of several wake-up modes; the camera group determines the corresponding recognition mode according to the wake-up mode and acquires scene data within the preset range. The camera processor performs image analysis on the received scene data and determines whether image deviation exists. If deviation exists, the deviation ratio of the camera group is calculated, the camera encoding analysis is completed, and a second control strategy is determined; scene data is then collected again according to the second control strategy, and the newly collected scene data is judged for image deviation once more. If no image deviation exists, the current scene data is set as the target scene data; if deviation still exists, scene analysis is performed, the second control strategy is determined to control the acquisition effect of the adjustable camera group, safety judgment is performed on the acquisition, the storage type is determined, and storage of the acquired data is completed. In use, voice wake-up can be set as the common shooting scheme and a double tap as quick shooting: after the double tap, the camera group responds quickly and shoots the preset range at a preset angle within a preset time; the shot data is encoded and corrected to obtain higher-quality images, and the images are finally compared and analyzed to judge the anti-shake quality of the quickly shot images and whether iteration is needed, and local or cloud storage is chosen according to the safety state;
the beneficial effects of the above technical scheme are: the different wake-up modes improve the timeliness of the camera's response in emergencies, and the different preset modes improve the efficiency of determining the control strategy and the speed of acquiring scene data in different scenes; the camera processor confirms the control strategy quickly, improving the pertinence and applicability of the control strategy in the current scene; the judgment of image deviation improves the quality and safety of the captured imaging; and the final comparative analysis continuously improves the camera's processing capability across different usage scenarios while protecting data safety and the safety of the shooting process.
Example 2:
in one embodiment, the step S01 includes:
setting a control instruction according to the control requirement of a user, and setting a corresponding pre-recognition mode according to the control instruction; wherein:
the control instructions include: a voice control instruction, a fingerprint control instruction and a tap control instruction; wherein:
the voice control instruction comprises: a Mandarin control instruction, a dialect control instruction, a foreign-language control instruction, a long-tone control instruction and a short-tone control instruction;
the pre-recognition mode includes: a monocular identification mode, a binocular identification mode, a rapid shooting mode and a sound tracing mode;
the working principle of the technical scheme is as follows: in the prior art, the camera is generally woken in a single mode, or in multiple modes whose function is limited to waking alone. In the above technical solution, control instructions are set by the user according to the shooting requirement, for example voice control, fingerprint control and tap control, where voice control covers multiple languages; the recognition modes comprise a monocular identification mode, a binocular identification mode, a rapid shooting mode and a sound tracing mode. The monocular and binocular identification modes require changing the number of active cameras in the camera group in advance so that these modes work normally. In an emergency, a preset foreign-language phrase can be used for waking, connecting to a preset alarm platform to transmit the alert;
the beneficial effects of the above technical scheme are: the different control instructions enrich the camera's wake-up scenarios, and recognition of different languages broadens its range of application.
Example 3:
in one embodiment, the step S01 further includes:
the adjustable camera group is awakened according to the pre-recognition mode to acquire scene data within a preset range; wherein:
awakening the adjustable camera group according to the pre-recognition mode comprises the following steps:
step one: the user performs a control operation on the adjustable camera group to generate a user control data group;
step two: according to the user control data group and the pre-control data group corresponding to the control instruction, carrying out per-category data similarity calculation to generate a control data similarity value group, carrying out awakening judgment and determining an awakening result; wherein:
when every similarity value in the control data similarity value group is within its corresponding preset threshold range, the awakening succeeds;
when one or more similarity values in the control data similarity value group are not within their corresponding preset threshold ranges, the awakening fails;
step three: after the adjustable camera group is awakened successfully, shooting recognition is carried out according to the pre-recognition mode to obtain the scene data;
the working principle of the technical scheme is as follows: as shown in fig. 2, in the above technical solution, the camera group is woken by the control instruction, and when a user wakes the camera group, the user's operation information is recorded; the similarity between each category of the user control data and the pre-control data of the control instruction is then judged in turn, and the wake-up succeeds only when the similarity of every data category falls within its preset range; finally, detection is performed according to the recognition mode corresponding to the wake-up mode to obtain the recognition data;
the beneficial effects of the above technical scheme are: by judging different awakening modes, the awakening accuracy is improved, and the awakening error condition is reduced.
Example 4:
in one embodiment, the step S02 includes:
the image processor judges the type of the received scene data and determines the image type of the scene data; wherein:
the image types include: video data, image data;
when the image type of the scene data is video data, video framing is performed to obtain the corresponding frame images, and image deviation analysis is performed to determine a deviation result;
when the image type of the scene data is image data, image deviation analysis is performed to generate deviation data; wherein:
the image deviation analysis comprises: image definition analysis, image integrity analysis and image exposure analysis;
the working principle of the technical scheme is as follows: in the prior art, a camera set up under preset conditions directly performs collection and transmission; this is fast, but its function is monotonous. In this technical scheme, the processor judges whether the collected scene data is video data or image data; if it is video data, framing is performed to obtain frame images, and if it is image data, image deviation analysis is performed directly, covering image definition analysis, image integrity analysis and image exposure analysis;
the beneficial effects of the above technical scheme are: by framing the video data, the comprehensiveness of scene data judgment is improved, and a foundation is laid for improving the efficiency of subsequent strategy analysis.
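The video/image branching of step S02 can be sketched as below. A video is modeled as a plain list of frames, and the deviation check is a stand-in stub, since the patent does not disclose the concrete definition/integrity/exposure analyses.

```python
def deviation_analysis(frame):
    # Stand-in for the definition / integrity / exposure checks.
    return {"frame": frame, "deviation": False}

def analyze_scene(scene_data):
    """Route scene data by image type: video data is framed first
    (one analysis per frame), a still image goes straight to image
    deviation analysis."""
    if isinstance(scene_data, list):          # video data: one entry per frame
        return [deviation_analysis(f) for f in scene_data]
    return [deviation_analysis(scene_data)]   # single image
```

In a real pipeline the list of frames would come from a video decoder; here the list itself plays that role so the branching logic stays visible.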
Example 5:
in one embodiment, the image deviation analysis comprises:
the camera processor extracts image preprocessing data in a preset image database according to the scene data;
calculating the matching degree of the image data based on the image data and the image preprocessing data, performing deviation judgment, and determining a deviation judgment result; wherein:
when the matching degree of the image data is greater than or equal to a preset matching degree threshold value, the image has no image deviation;
when the matching degree of the image data is smaller than a preset matching degree threshold value, the image has image deviation;
the working principle of the technical scheme is as follows: the matching degree of the image data is calculated from the acquired scene data and the corresponding image preprocessing data in the preset database, and whether an image deviation exists is judged according to the matching degree;
the beneficial effects of the above technical scheme are: by means of deviation analysis of the images, judgment accuracy of the images is improved, and applicability of strategy adjustment is enhanced through deviation judgment.
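The matching-degree judgment of Example 5 might look like the following sketch. The similarity metric (element-wise agreement) and the 0.9 threshold are illustrative assumptions; the patent only states that a matching degree is compared against a preset threshold.

```python
def has_image_deviation(image, reference, threshold=0.9):
    """Deviation judgment: compute a matching degree between the image
    and the preprocessed reference, and flag a deviation when it falls
    below the preset matching-degree threshold."""
    matches = sum(1 for a, b in zip(image, reference) if a == b)
    matching_degree = matches / max(len(reference), 1)
    return matching_degree < threshold
```

A production implementation would use a proper image-similarity measure (e.g. a structural or feature-based comparison) in place of the element-wise count.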
Example 6:
in one embodiment, the step S03 includes:
the image pickup processor performs image pickup coding on the deviation image corresponding to the received deviation data to generate a coded image; wherein:
the image pickup coding comprises the following steps:
step S100: generating deviation division regions by performing linear analysis on the deviation data; wherein:
the deviation division regions include: a linear deviation region and a nonlinear deviation region;
step S200: performing the corresponding phase data modulation analysis according to the deviation division region to obtain first coded data, performing data classification on the first coded data, and determining the coded data category; wherein:
the coded data categories include: high-frequency coded data, low-frequency coded data;
step S300: establishing a coding matrix according to the first coded data, and generating a coded image;
the working principle of the technical scheme is as follows: as shown in fig. 3, the image pickup processor generates a coded image by performing image pickup coding on the received scene data; firstly, linear deviation data of a dynamic target within a preset range is acquired and deviation division regions are generated, including a linear deviation region and a nonlinear deviation region; then, the phase data corresponding to each deviation region is retrieved and modulation analysis is performed to obtain first coded data, which is classified into high-frequency coded data and low-frequency coded data; finally, a coding matrix is established from the first coded data and the coded image is generated;
the beneficial effects of the above technical scheme are: by linear deviation judgment, the analysis efficiency of the dynamic target moving data is improved, and the coding efficiency is improved by classifying the coded data.
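The data classification of step S200 can be illustrated as a simple frequency split. The cutoff value and the (frequency, value) representation are assumptions made for the example, since the modulation analysis itself is not specified in the patent.

```python
def classify_coded_data(coded, cutoff=10.0):
    """Split first coded data, given as (frequency, value) pairs, into
    high-frequency and low-frequency coded data by a cutoff frequency."""
    high = [(f, v) for f, v in coded if f >= cutoff]
    low = [(f, v) for f, v in coded if f < cutoff]
    return high, low
```

Classifying once up front means later stages (matrix construction, screening) can operate on each band independently, which is the efficiency gain the paragraph above claims.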
Example 7:
in one embodiment, the step S03 further includes:
performing data frequency screening based on the coded image to determine high-frequency data; disassembling the coded image according to the high-frequency data to generate coded sub-images; generating a decoding phase diagram by integrating the coded sub-images; performing decoding recovery according to the decoding phase diagram to obtain first decoding data and generate a decoded image;
the integration processing comprises: sub-image translation processing, sub-image rotation processing and sub-image accumulation processing;
the working principle of the technical scheme is as follows: the high-frequency information of the coded image is screened out, and the coded image is then disassembled according to this high-frequency information to generate coded sub-images; the coded sub-images are processed according to the corresponding preset integration mode, such as a mixed operation of translation, rotation and accumulation, to generate a decoding phase diagram; recovery processing is performed according to a preset recovery comparison table, finally the first decoding data is obtained and the decoded image is generated;
the beneficial effects of the above technical scheme are: the different composite integration modes improve the pertinence and efficiency of decoding, laying a foundation for improving imaging quality.
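The sub-image integration of Example 7 (translation, rotation, accumulation) can be sketched on plain nested lists as below. These are generic image operations shown for illustration, not the patent's actual decoding procedure.

```python
def translate(sub, dx):
    """Shift each row of a sub-image right by dx pixels, zero-filling
    on the left (pixels shifted past the edge are dropped)."""
    return [[0] * dx + row[:len(row) - dx] for row in sub]

def rotate90(sub):
    """Rotate a sub-image 90 degrees clockwise."""
    return [list(row) for row in zip(*sub[::-1])]

def accumulate(subs):
    """Element-wise sum of equally sized sub-images, producing a
    combined map analogous to the decoding phase diagram."""
    out = [[0] * len(subs[0][0]) for _ in subs[0]]
    for sub in subs:
        for i, row in enumerate(sub):
            for j, v in enumerate(row):
                out[i][j] += v
    return out
```

A "mixed operation" in the text would chain these, e.g. accumulating several translated and rotated copies of the coded sub-images.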
Example 8:
the step S03 further includes:
performing deviation calculation on the first decoding data and the decoding comparison data in a preset decoding database to generate the deviation rate of the adjustable camera group, and performing deviation rate judgment; wherein:
when the deviation rate of the adjustable camera group is within a preset threshold range, no strategy deviation correction is needed;
when the deviation rate of the adjustable camera group is not within the preset threshold range, strategy deviation correction is performed; wherein:
the strategy deviation correction comprises: performing strategy matching on the deviation rate of the adjustable camera group against a preset deviation comparison table, determining the deviation correction strategy corresponding to the strategy deviation value, performing strategy adjustment according to the deviation correction strategy, and generating a second control strategy; wherein:
the deviation correction strategies comprise: a rotation adjustment strategy of the adjustable camera group, a displacement adjustment strategy of the adjustable camera group and a lighting adjustment strategy of the adjustable camera group;
the working principle of the technical scheme is as follows: after the first decoding data is acquired, whether it is accurate is judged; a coding deviation value is calculated by comparing the first decoding data with the corresponding decoding comparison data in the preset decoding database, the deviation rate is obtained from the deviation value and its preset base quantity, and the deviation rate is judged; if decoding is qualified, decoding proceeds and a correct decoding result is obtained; if it is not qualified, coding correction is needed, a correction strategy is determined, and a corrected shooting mode is obtained;
the beneficial effects of the above technical scheme are: the decoding accuracy is improved and the decoding success rate is ensured by calculating the deviation rate of decoded data, and the current coding and decoding are optimized by coding and correcting.
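The strategy matching of Example 8 amounts to a table lookup keyed by the deviation rate. The band boundaries below are invented for illustration; only the three strategy kinds (rotation, displacement, lighting adjustment) come from the patent.

```python
def pick_correction(deviation_rate, ok_range=(0.0, 0.05)):
    """Return None when the deviation rate is inside the preset
    threshold range (no strategy correction needed); otherwise look up
    the correction strategy in a deviation comparison table."""
    lo, hi = ok_range
    if lo <= deviation_rate <= hi:
        return None
    table = [  # (upper bound of deviation rate, correction strategy)
        (0.15, "rotation adjustment"),
        (0.30, "displacement adjustment"),
        (1.00, "lighting adjustment"),
    ]
    for bound, strategy in table:
        if deviation_rate <= bound:
            return strategy
    return "lighting adjustment"  # fall-back for rates above the table
```

The chosen strategy would then parameterize the second control strategy used for re-acquisition in step S04.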
Example 9:
in one embodiment, the step S04 includes:
shooting and collection are performed through the second control strategy to obtain second scene data; whether the second scene data has the image deviation is judged and a judgment result is determined; wherein:
when the second scene data has no image deviation, the second scene data is taken as the target scene data;
when the second scene data has an image deviation, scene comparison is performed to generate a comparison result; wherein:
the scene comparison performs comparison analysis on the second scene data and the scene data to generate comparison data, and the comparison result is determined according to the comparison data; wherein:
the comparison analysis comprises: anti-shake contrast analysis, safety contrast analysis and iterative analysis;
the comparison results include: a forward result, a reverse result and a no-change result;
storage judgment is performed according to the comparison result; wherein:
when the comparison result is a forward result, cloud storage is performed;
when the comparison result is a reverse result or a no-change result, local storage is performed;
the working principle of the technical scheme is as follows: when deviation correction is needed, the current imaging parameters, including the encoding parameters and decoding parameters, are adjusted; a deviation-corrected acquisition strategy is then generated and collection is performed to obtain second scene data; the second scene data is then compared with the original scene data through anti-shake contrast analysis, safety contrast analysis and iterative analysis, and the comparison result, namely a forward result, a reverse result or a no-change result, is determined; forward results are stored in the cloud, while no-change and reverse results need to be quickly recorded in local storage, laying a foundation for subsequent learning and training;
the iterative analysis comprises the following steps:
step S11: acquiring a collected data set {x_1, x_2, …, x_n} and calculating the cost function D_p corresponding to the collected data set (the formula is given as an image in the original), wherein x_p represents the p-th data in the collected data set, p is a variable with 1 ≤ p ≤ n, x_q represents the q-th data in the collected data set, and q is a variable with 1 ≤ q ≤ n;
step S22: calculating the gradient momentum of the cost function D_p within a preset time and establishing a system of equations (given as images in the original), whose symbols denote: the first gradient momentum, the second gradient momentum δ, the weight, the bias weight, the first gradient coefficient τ, the second gradient coefficient τ″, the iterated first gradient momentum, the iterated second gradient momentum δ*, the bias value μ and its partial derivative dμ, the iterated third gradient momentum, and the iterated fourth gradient momentum γ*;
step S33: calculating the weight corresponding to the cost function D_p and establishing a system of equations (given as an image in the original), wherein μ′ is the initial bias value, σ is the first learning rate, and ω is the smoothing term, with the initial weight also given as an image;
step S44: performing iteration judgment according to the weight; wherein:
when the weight is within a preset range, positive iteration is performed; otherwise, negative iteration is performed;
the beneficial effects of the above technical scheme are: deviation correction and parameter adjustment improve the accuracy of the camera imaging result and guarantee imaging quality, and the fitting analysis judges the influence of the fitting result on the storage mode, which greatly protects the safety of data storage.
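The storage judgment of Example 9 reduces to routing by comparison result. The list-based "stores" below are placeholders for real cloud and local back-ends, which the patent does not specify.

```python
def route_storage(comparison_result, data, cloud, local):
    """Forward results go to cloud storage; reverse and no-change
    results are kept in local storage only."""
    if comparison_result == "forward":
        cloud.append(data)
    else:  # "reverse" or "no_change"
        local.append(data)

cloud_store, local_store = [], []
route_storage("forward", "scene_a", cloud_store, local_store)
route_storage("reverse", "scene_b", cloud_store, local_store)
```

Keeping non-forward results local supports the "subsequent learning training" mentioned above without uploading results that did not improve.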
Example 10:
in one embodiment, the safety contrast analysis comprises:
scene safety analysis and data acquisition safety analysis; wherein:
in the scene safety analysis, the adjustable camera group acquires scene data and transmits it to the camera processor, and the camera processor determines the data category by classifying the scene data; wherein:
the data categories include: static data, dynamic data;
corresponding safety analysis is performed according to the data category to generate a safety analysis result; wherein:
the safety analysis includes: placement safety analysis and movement safety analysis;
when the safety analysis result is within a preset range, the scene is a safe scene and safety feedback is performed; wherein:
the safety feedback can be given through a voice preset by the user;
when the safety analysis result is not within the preset range, the scene is an unsafe scene and early-warning processing is performed;
the data acquisition safety analysis performs storage judgment on the access information of a preset camera protocol; wherein:
when the access information is within the preset safety protocol information range, cloud storage is performed normally; otherwise, cloud storage is stopped and local storage is performed;
the working principle of the technical scheme is as follows: safety is extremely important throughout the working process of the camera, and this technical scheme considers both the safety of the user and the safety of the camera from two aspects, scene safety and data acquisition safety; for scene safety, the camera collects the current scene data, classifies it, and quickly performs the corresponding safety analysis in turn, including placement safety analysis and movement safety analysis; if the result is safe, feedback is given in the prompt mode preset by the user, and if it is unsafe, early-warning processing is performed; during data acquisition, the camera protocols are checked one by one, and only protocols within the safe range may continue data acquisition and storage; otherwise cloud storage is stopped immediately and data is stored only on the local storage card;
the safety analysis further comprises a security analysis; wherein the security analysis comprises the following steps:
step S110: obtaining a safety data set {ρ_1, ρ_2, …, ρ_m} and establishing a comparison equation (given as an image in the original), wherein ρ_α is the α-th data in the safety data set, β_i is a comparison coefficient, ρ_(α−i) is the (α−i)-th data in the safety data set, and λ_α is the error;
step S120: establishing an error equation according to the comparison equation (given as an image in the original), wherein η(i, j) is the binomial distribution obeyed by the i-th and j-th data in the safety data set, and f is the coefficient image equation;
step S130: calculating the error value and performing safety judgment; wherein:
when the error value is within a preset range, the state is safe; otherwise, the state is unsafe;
the beneficial effects of the above technical scheme are: scene safety and acquisition safety respectively improve the safety of the user and of data acquisition and storage, and also expand the usage scenarios of the camera; performing safety analysis in turn according to the different categories improves the comprehensiveness and accuracy of the safety analysis.
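The data-acquisition safety check of Example 10 can be sketched as an allowlist test on the access protocol. The protocol names below are assumptions for illustration; the patent only speaks of a "preset safety protocol information range".

```python
SAFE_PROTOCOLS = {"https", "rtsps"}  # assumed preset safety protocol range

def storage_target(access_protocol):
    """Cloud storage proceeds normally only for access information inside
    the preset safe-protocol range; otherwise cloud storage stops and
    data goes to the local storage card."""
    return "cloud" if access_protocol in SAFE_PROTOCOLS else "local"
```

An allowlist (rather than a blocklist) matches the fail-safe behavior described above: anything unrecognized immediately falls back to local-only storage.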
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An intelligent control method for a camera of a wearable device, comprising the following steps:
step S01: waking an adjustable camera group through a preset control instruction to acquire scene data within a preset range;
step S02: an image processor performs image analysis according to the received scene data and determines an image deviation;
step S03: calculating the deviation rate of the adjustable camera group according to the deviation and a preset image standard database, and determining a second control strategy of the adjustable camera group;
step S04: controlling the adjustable camera group to acquire scene data again according to the second control strategy, determining second scene data, judging whether the second scene data has the image deviation, and taking the second scene data as target scene data when the second scene data has no image deviation.
2. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S01 comprises:
setting a control instruction according to the control requirement of a user, and setting a corresponding pre-recognition mode according to the control instruction; wherein:
the control instructions include: a voice control instruction, a fingerprint control instruction and a camera touch control instruction; wherein:
the voice control instructions include: a mandarin control instruction, a dialect control instruction, a foreign-language control instruction, a long-pitch control instruction and a short-pitch control instruction;
the pre-recognition modes include: a monocular recognition mode, a binocular recognition mode, a rapid shooting mode and a sound-tracing mode.
3. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S01 further comprises:
waking the adjustable camera group according to a pre-recognition mode to acquire scene data within a preset range; wherein:
waking the adjustable camera group according to the pre-recognition mode comprises the following steps:
step one: the user performs a control operation on the adjustable camera group to generate a user control data group;
step two: performing data category similarity calculation according to the user control data group and the pre-control data group corresponding to the control instruction to generate a control data similarity value group, performing wake-up judgment and determining a wake-up result; wherein:
when every similarity value in the control data similarity value group is within its corresponding preset threshold range, the wake-up succeeds;
when one or more similarity values in the control data similarity value group are not within their corresponding preset threshold ranges, the wake-up fails;
step three: after the adjustable camera group is successfully woken, camera recognition is performed according to the pre-recognition mode to obtain scene data.
4. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S02 comprises:
the image processor judges the type of the received scene data and determines the image type of the scene data; wherein:
the image types include: video data, image data;
when the image type of the scene data is video data, video framing is performed to obtain the corresponding frame images, and image deviation analysis is performed to determine a deviation result;
when the image type of the scene data is image data, image deviation analysis is performed to generate deviation data; wherein:
the image deviation analysis comprises: image definition analysis, image integrity analysis and image exposure analysis.
5. The intelligent control method for the camera of the wearable device according to claim 4, wherein the image deviation analysis comprises:
the camera processor extracts image preprocessing data from a preset image database according to the scene data;
calculating the matching degree of the image data based on the image data and the image preprocessing data, performing deviation judgment, and determining a deviation judgment result; wherein:
when the matching degree of the image data is greater than or equal to a preset matching degree threshold value, the image has no image deviation;
when the matching degree of the image data is smaller than the preset matching degree threshold value, the image has an image deviation.
6. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S03 comprises:
the image pickup processor performs image pickup coding on the deviation image corresponding to the received deviation data to generate a coded image; wherein:
the image pickup coding comprises the following steps:
step S100: generating deviation division regions by performing linear analysis on the deviation data; wherein:
the deviation division regions include: a linear deviation region and a nonlinear deviation region;
step S200: performing the corresponding phase data modulation analysis according to the deviation division region to obtain first coded data, performing data classification on the first coded data, and determining the coded data category; wherein:
the coded data categories include: high-frequency coded data, low-frequency coded data;
step S300: establishing a coding matrix according to the first coded data, and generating a coded image.
7. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S03 further comprises:
performing data frequency screening based on the coded image to determine high-frequency data; disassembling the coded image according to the high-frequency data to generate coded sub-images; generating a decoding phase diagram by integrating the coded sub-images; performing decoding recovery according to the decoding phase diagram to obtain first decoding data and generate a decoded image;
the integration processing comprises: sub-image translation processing, sub-image rotation processing and sub-image accumulation processing.
8. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S03 further comprises:
performing deviation calculation on the first decoding data and the decoding comparison data in a preset decoding database to generate the deviation rate of the adjustable camera group, and performing deviation rate judgment; wherein:
when the deviation rate of the adjustable camera group is within a preset threshold range, no strategy deviation correction is needed;
when the deviation rate of the adjustable camera group is not within the preset threshold range, strategy deviation correction is performed; wherein:
the strategy deviation correction comprises: performing strategy matching on the deviation rate of the adjustable camera group against a preset deviation comparison table, determining the deviation correction strategy corresponding to the strategy deviation value, performing strategy adjustment according to the deviation correction strategy, and generating a second control strategy; wherein:
the deviation correction strategies comprise: a rotation adjustment strategy of the adjustable camera group, a displacement adjustment strategy of the adjustable camera group and a lighting adjustment strategy of the adjustable camera group.
9. The intelligent control method for the camera of the wearable device as claimed in claim 1, wherein the step S04 comprises:
performing shooting and collection through the second control strategy to obtain second scene data; judging whether the second scene data has the image deviation, and determining a judgment result; wherein:
when the second scene data has no image deviation, taking the second scene data as the target scene data;
when the second scene data has an image deviation, performing scene comparison to generate a comparison result; wherein:
the scene comparison performs comparison analysis on the second scene data and the scene data to generate comparison data, and the comparison result is determined according to the comparison data; wherein:
the comparison analysis comprises: anti-shake contrast analysis, safety contrast analysis and iterative analysis;
the comparison results include: a forward result, a reverse result and a no-change result;
performing storage judgment according to the comparison result; wherein:
when the comparison result is a forward result, cloud storage is performed;
when the comparison result is a reverse result or a no-change result, local storage is performed.
10. The intelligent control method for the camera of the wearable device as claimed in claim 9, wherein the safety contrast analysis comprises:
scene safety analysis and data acquisition safety analysis; wherein:
in the scene safety analysis, the adjustable camera group acquires scene data and transmits it to the camera processor, and the camera processor determines the data category by classifying the scene data; wherein:
the data categories include: static data, dynamic data;
corresponding safety analysis is performed according to the data category to generate a safety analysis result; wherein:
the safety analysis includes: placement safety analysis and movement safety analysis;
when the safety analysis result is within a preset range, the scene is a safe scene and safety feedback is performed; wherein:
the safety feedback can be given through a voice preset by the user;
when the safety analysis result is not within the preset range, the scene is an unsafe scene and early-warning processing is performed;
the data acquisition safety analysis performs storage judgment on the access information of a preset camera protocol; wherein:
when the access information is within the preset safety protocol information range, cloud storage is performed normally; otherwise, cloud storage is stopped and local storage is performed.
CN202210263511.7A 2022-03-17 2022-03-17 Intelligent control method for camera of wearable device Active CN114666501B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210263511.7A CN114666501B (en) 2022-03-17 2022-03-17 Intelligent control method for camera of wearable device

Publications (2)

Publication Number Publication Date
CN114666501A true CN114666501A (en) 2022-06-24
CN114666501B CN114666501B (en) 2023-04-07


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104156819A (en) * 2014-08-08 2014-11-19 中国矿业大学(北京) Method and device used for automatically observing and correcting unsafe behaviors at important posts
CN108235816A (en) * 2018-01-10 2018-06-29 深圳前海达闼云端智能科技有限公司 Image recognition method, system, electronic device and computer program product
US10225511B1 (en) * 2015-12-30 2019-03-05 Google Llc Low power framework for controlling image sensor mode in a mobile image capture device
CN110740264A (en) * 2019-10-31 2020-01-31 重庆工商职业学院 intelligent camera data rapid acquisition system and acquisition method
WO2021026855A1 (en) * 2019-08-15 2021-02-18 深圳市大疆创新科技有限公司 Machine vision-based image processing method and device
CN114040094A (en) * 2021-10-25 2022-02-11 青岛海信网络科技股份有限公司 Method and equipment for adjusting preset position based on pan-tilt camera


Also Published As

Publication number Publication date
CN114666501B (en) 2023-04-07

Similar Documents

Publication Publication Date Title
US7599549B2 (en) Image processing method, image processing apparatus, and computer readable medium, in which an image processing program is recorded
US8254644B2 (en) Method, apparatus, and program for detecting facial characteristic points
US7542591B2 (en) Target object detecting method, apparatus, and program
US20070189584A1 (en) Specific expression face detection method, and imaging control method, apparatus and program
CN110458829B (en) Image quality control method, device, equipment and storage medium based on artificial intelligence
US7995807B2 (en) Automatic trimming method, apparatus and program
CN108960047B (en) Face duplication removing method in video monitoring based on depth secondary tree
CN107992807B (en) Face recognition method and device based on CNN model
CN113239907B (en) Face recognition detection method and device, electronic equipment and storage medium
CN110287370B (en) Crime suspect tracking method and device based on-site shoe printing and storage medium
CN110633643A (en) Abnormal behavior detection method and system for smart community
CN107871103B (en) Face authentication method and device
CN103593654A (en) Method and device for face location
CN111614897B (en) Intelligent photographing method based on multi-dimensional driving of user preference
CN116958584B (en) Key point detection method, regression model training method and device and electronic equipment
CN112446322B (en) Eyeball characteristic detection method, device, equipment and computer readable storage medium
CN107483813A (en) A kind of method, apparatus and storage device that recorded broadcast is tracked according to gesture
CN113762049B (en) Content identification method, content identification device, storage medium and terminal equipment
CN112700568B (en) Identity authentication method, equipment and computer readable storage medium
CN114666501B (en) Intelligent control method for camera of wearable device
TW202211005A (en) Activity recognition method, activity recognition system, and handwriting identification system
CN112132218B (en) Image processing method, device, electronic equipment and storage medium
CN110263196B (en) Image retrieval method, image retrieval device, electronic equipment and storage medium
CN117291252B (en) Stable video generation model training method, generation method, equipment and storage medium
CN117854156B (en) Training method and related device for feature extraction model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant