CN116320743A - Object recognition method, device, electronic equipment and computer storage medium - Google Patents


Info

Publication number
CN116320743A
CN116320743A
Authority
CN
China
Prior art keywords: terminal, calling, zooming, image pickup, information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310210820.2A
Other languages
Chinese (zh)
Inventor
曾超然
刘邦哲
王楠
罗阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd filed Critical Alibaba China Co Ltd
Priority to CN202310210820.2A priority Critical patent/CN116320743A/en
Publication of CN116320743A publication Critical patent/CN116320743A/en
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; scene-specific elements
    • G06V20/10: Terrestrial scenes

Abstract

In the method, when a virtual camera in a target application invokes an image pickup device of a terminal, a zoom coefficient for zooming the image pickup device can be configured, based on the invoking operation, according to focusing parameter information of the image pickup device of the terminal; the zoom coefficient is a parameter indicating that the image pickup device with the macro function in the terminal should be invoked for focusing. After the zoom coefficient is configured, the image pickup device of the terminal is invoked to identify the object to be identified based on the zoom coefficient. In this process, the image pickup devices can be switched based on the zoom coefficient, so that the image pickup device with the macro function in the terminal can be invoked for focusing, and the object to be identified can be focused clearly when it is identified in the application.

Description

Object recognition method, device, electronic equipment and computer storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an object recognition method, an object recognition device, an electronic device, and a computer storage medium.
Background
With the rapid development of image technology, image recognition has become particularly important. In particular, an object to be identified may be scanned within an application; the scanning is in fact performed on the object to be identified by a virtual camera in the application.
However, scanning an object to be identified is often performed under a macro condition, which in many cases causes the scanned object to appear blurred, that is: the object to be identified cannot be brought into focus, and the object therefore cannot be recognized. How to focus clearly on an object to be identified when it is recognized in an application is thus a technical problem to be solved.
Disclosure of Invention
The application provides an object recognition method to solve the problem of focusing clearly on an object to be recognized when it is recognized in an application, and also provides an object recognition device, an electronic device, and a computer storage medium.
The application provides an object identification method, which comprises the following steps:
In response to detecting an invoking operation of a virtual camera in a target application on an image pickup device of a terminal, configuring a zoom coefficient for zooming the image pickup device of the terminal according to focusing parameter information of the image pickup device of the terminal; the zoom coefficient is a parameter used for indicating that the image pickup devices in the terminal should be switched so as to invoke the image pickup device with the macro function for focusing;
and calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient.
Optionally, the method further comprises: configuring an automatic focusing parameter for zooming an image pickup device of the terminal;
the step of calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient comprises the following steps:
and calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient and the automatic focusing parameter.
Optionally, the method further comprises:
determining whether a switch for controlling the macro function is in an on state;
if the switch for controlling the macro function is in an on state, the configuring of the zoom coefficient for zooming the image pickup device of the terminal according to the focusing parameter information of the image pickup device of the terminal comprises: invoking a first interface to configure the zoom coefficient for zooming the image pickup device of the terminal;
and if the switch for controlling the macro function is in an off state, the configuring of the zoom coefficient for zooming the image pickup device of the terminal according to the focusing parameter information of the image pickup device of the terminal comprises: invoking a second interface to configure the zoom coefficient for zooming the image pickup device of the terminal.
Optionally, before configuring the zoom coefficient for zooming the image capturing apparatus of the terminal according to the focusing parameter information of the image capturing apparatus of the terminal, the method further includes:
creating a virtual camera in the target application, wherein the virtual camera corresponds to the camera equipment of the terminal;
initializing parameters of the virtual camera to obtain first output result information of the virtual camera;
configuring imaging parameters of imaging equipment of the terminal, and adjusting the first output result information according to the configured imaging parameters to obtain second output result information;
and performing visual conversion on the second output result information to obtain visual result information corresponding to the second output result information.
Optionally, the configuring a zoom factor for zooming the image capturing device of the terminal according to the focusing parameter information of the image capturing device of the terminal includes:
And configuring a zooming coefficient for zooming the virtual camera according to the visualization result information and focusing parameter information of the imaging equipment of the terminal.
Optionally, after configuring the zoom coefficient for zooming the virtual camera, the method further includes:
determining whether the virtual camera is currently used in a target recognition scene;
if the virtual camera is currently used in the target recognition scene, focusing on the object to be identified in a manual focusing mode and configuring manual zooming parameters; and if the virtual camera is not currently used in a target recognition scene, resetting the value of the zoom coefficient.
Optionally, the calling the camera device of the terminal to identify the object to be identified based on the zoom coefficient and the auto-focusing parameter includes:
determining calling sequence information and zooming time information of the image pickup device based on the zooming coefficient and the automatic focusing parameter; the calling sequence information and the zooming time information of the image pickup equipment are zooming information related to zooming of the image pickup equipment of the terminal in the process of identifying the object to be identified;
and calling the camera equipment of the terminal to identify the object to be identified according to the calling sequence information and the zooming time information of the camera equipment.
Optionally, the method further comprises: acquiring system version information of a terminal;
the step of calling the camera equipment of the terminal to identify the object to be identified according to the calling sequence information and the zooming time information of the camera equipment comprises the following steps:
and calling the image pickup equipment of the terminal to identify the object to be identified according to the system version information, the calling sequence information of the image pickup equipment and the zooming time information.
Optionally, the calling the image capturing device of the terminal to identify the object to be identified according to the system version information, the calling sequence information of the image capturing device and the zoom time information includes:
judging whether the system version information meets a first preset condition or not;
if the system version information meets a first preset condition, calling the camera equipment of the terminal to identify an object to be identified based on a preset calling strategy corresponding to the first preset condition, calling sequence information of the camera equipment and the zooming time information;
and if the system version information does not meet the first preset condition, calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to the second preset condition, calling sequence information of the camera equipment and the zooming time information.
Optionally, before determining whether the system version information meets the first preset condition, the method further includes:
judging whether the rear camera equipment is started to identify the object to be identified;
if the post camera equipment is started to identify the object to be identified, executing the step of judging whether the system version information meets a first preset condition;
if the post camera equipment is not started to identify the object to be identified, the camera equipment of the terminal is called to identify the object to be identified according to the system version information, the calling sequence information of the camera equipment and the zooming time information, and the method comprises the following steps: and calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to a second preset condition, calling sequence information of the camera equipment and the zooming time information.
Optionally, the number of zoom coefficients is one less than the number of times of invoking the image capturing apparatus of the terminal.
The application provides an object recognition device, comprising:
a configuration unit, configured to configure, in response to detecting an invoking operation of a virtual camera in a target application on an image pickup device of a terminal, a zoom coefficient for zooming the image pickup device of the terminal according to focusing parameter information of the image pickup device of the terminal; the zoom coefficient is a parameter used for indicating that the image pickup devices in the terminal should be switched so as to invoke the image pickup device with the macro function for focusing;
And the calling unit is used for calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient.
The application provides an electronic device, comprising:
a processor;
and a memory for storing a computer program to be executed by the processor to perform the object recognition method.
The present application provides a computer storage medium storing a computer program to be executed by a processor to perform the above object recognition method.
Compared with the prior art, the embodiment of the application has the following advantages:
the application provides an object identification method, which comprises the following steps: in response to detecting a calling operation of a virtual camera in a target application to an image pickup device of a terminal, configuring a zoom coefficient for zooming the image pickup device of the terminal according to focusing parameter information of the image pickup device of the terminal; the zoom coefficient is a parameter used for representing that the camera equipment in the calling terminal is switched to call the camera equipment with the micro-distance function for focusing; and calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient. Since in the method, when the object to be identified is identified, the virtual camera in the target application invokes the camera of the terminal, the zoom coefficient for zooming the camera of the terminal can be configured according to the focusing parameter information of the camera of the terminal based on the invoking operation, and the zoom coefficient is a parameter for representing that the camera in the invoking terminal is switched to invoke the camera with the macro function to focus; after the zoom coefficient for zooming the camera equipment of the terminal is configured, the camera equipment of the terminal is called to identify the object to be identified based on the zoom coefficient, in the process, the camera equipment can be switched based on the zoom coefficient for representing the camera equipment with the macro function in the calling terminal, so that the macro function of the camera equipment with the macro function in the terminal can be called to focus, and the object to be identified can be clearly focused when the object to be identified is identified in application.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show only some embodiments of the present application; a person of ordinary skill in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a first scenario diagram of an object recognition method of the present application;
FIG. 1A is a schematic diagram of a second scenario of the object recognition method of the present application;
FIG. 2 is a schematic diagram of a conventional object recognition method;
fig. 3 is a flowchart of an object recognition method according to a first embodiment of the present application;
FIG. 4 is a flow chart for configuring a virtual camera;
FIG. 5 is a flow chart of an adaptation process for adapting an existing configuration virtual camera after adding a macro function;
fig. 6 is an adaptation flow chart of an image pickup apparatus adapted to an existing terminal after adding a macro function;
fig. 7 is a schematic diagram of an object recognition device according to a second embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device according to a third embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The application may, however, be embodied in many other ways than those described herein, and similar generalizations can be made by those skilled in the art without departing from the spirit of the application; the application is therefore not limited to the specific embodiments disclosed below.
The application provides an object recognition method, an object recognition device, an electronic apparatus, and a computer storage medium. The following describes an object recognition method, an object recognition apparatus, an electronic device, and a computer storage medium, respectively, by specific embodiments.
The object recognition method can be applied to scenarios in which a virtual camera is used to recognize objects in various applications. For example, in a shopping application, a user may need to recognize an object with the application's virtual camera in order to identify matching goods, so that the user can order goods matching the object to be identified without the cumbersome process of entering search terms and browsing results. When the virtual camera is used to recognize and photograph the object to be identified, the object may be photographed at close range, and the macro function of the camera is then needed to focus clearly on it. Before the technical solution is described, the relevant concepts are explained.
Focal length: the focal length is the distance from the center point of the lens (i.e., the image pickup device) to the sharp image formed on the sensor plane. The focal length of the lens determines the size of the image that an object captured by the lens forms on the sensor. The essence of the focal length is the viewing angle: the shorter the focal length, the wider the viewing angle; the longer the focal length, the narrower the viewing angle.
Focusing: the image is sharpest when the focal plane falls on the imaging plane; if the imaging plane is outside the depth-of-focus range, the image is blurred. Focusing is therefore sometimes required to obtain a sharp image, and either manual or automatic focusing may be used.
Optical zooming: zooming achieved by the optical lens structure. By moving the lens (changing the focal length), the object to be photographed is enlarged or reduced; the larger the optical zoom magnification, the farther the objects that can be photographed. Optical zooming is common in single-lens reflex cameras. In a mobile phone camera, the effect of optical zooming is mainly achieved by switching lenses (image pickup devices).
Digital zooming: the enlargement of a digital image, for example scaling a 10 × 10 pixel area up to 15 × 15 pixels so that it appears as a larger picture. This enlargement is purely digital, without any optical lens involved.
Multi-camera device: in order to achieve stronger imaging effects on a terminal (such as a mobile phone), multiple image pickup devices are combined. By integrating several image pickup devices with different characteristics on the phone, diversified imaging effects are achieved: when the user operates the zoom on the shooting interface, different focal segments are handed over to the corresponding image pickup devices, and a combination of digital zooming and optical zooming is used to bridge the transitions between focal segments.
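On iOS, the multi-camera behavior described above is surfaced through AVFoundation's "virtual" devices. The following minimal Swift sketch (iOS 13+ assumed; function name illustrative) lists the constituent lenses of such a device and the zoom factors at which the system hands off between them:

```swift
import AVFoundation

// Sketch: a "virtual" multi-camera device folds several physical lenses
// together; availability varies by hardware, so all lookups are optional.
@available(iOS 13.0, *)
func describeMultiCamera() {
    guard let virtualDevice = AVCaptureDevice.default(.builtInTripleCamera,
                                                      for: .video,
                                                      position: .back)
        ?? AVCaptureDevice.default(.builtInDualWideCamera,
                                   for: .video,
                                   position: .back) else {
        print("No virtual multi-camera device on this terminal")
        return
    }
    // Physical lenses folded into the virtual device (e.g. ultra-wide + wide + tele).
    for physical in virtualDevice.constituentDevices {
        print(physical.deviceType.rawValue)
    }
    // Zoom factors at which the active lens is switched; between them,
    // digital zooming bridges the focal segments.
    print(virtualDevice.virtualDeviceSwitchOverVideoZoomFactors)
}
```

The patent's notion of a zoom coefficient that triggers a lens switch corresponds to crossing one of these switch-over factors.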
However, when a virtual camera is used to recognize an object to be identified in an application, even though the terminal has multiple image pickup devices, only one image pickup device (the default one) is used during recognition, and the image pickup devices cannot be switched among one another during the process; focusing therefore cannot use the macro function of the image pickup device that has one. The main reason is that, in the prior art, switching of image pickup devices is not supported during the whole process of recognizing an object with a virtual camera, so multiple image pickup devices cannot be used for recognition.
In the present application, in order to use the macro focusing function of the image pickup device, in response to detecting the invoking operation of the virtual camera in the target application on the image pickup device of the terminal, a zoom coefficient and an auto-focus parameter for zooming the image pickup device of the terminal are configured according to the focusing parameter information of the image pickup device; the zoom coefficient and the auto-focus parameter are parameters indicating that the image pickup devices in the terminal should be switched so as to invoke the image pickup device with the macro function for focusing. After they are configured, the image pickup device of the terminal is invoked to recognize the object to be recognized based on the zoom coefficient and the auto-focus parameter.
In order to facilitate understanding of the above object recognition method, please refer to fig. 1, a schematic diagram of a first scenario of the object recognition method of the present application. Fig. 1 corresponds to a rendering effect of actually invoking the image pickup device of the terminal to recognize the object to be recognized. To compare the recognition effect of the method of the present application with that of a conventional object recognition method, please refer to fig. 2, a schematic diagram of a conventional object recognition method; since the conventional method cannot use the macro function of the terminal's image pickup device, the rendered recognition result is blurred.
In the present application, the parameters indicating that the image pickup devices in the terminal should be switched to invoke the one with the macro function for focusing are configured in advance, so that later, when the virtual camera in the target application invokes the image pickup device of the terminal to recognize an object to be identified, the image pickup devices can be switched based on the zoom coefficient and the auto-focus parameters.
In the above scenario, the object to be identified may be a real, physical object, that is, the virtual camera is used to recognize a physical object, such as flowers, birds, fish, insects, people, or things. Of course, the object to be identified may also be an image.
In order to facilitate understanding of the above object recognition method, please refer to fig. 1A, a schematic diagram of a second scenario of the object recognition method of the present application. In this scenario, the object recognition method is performed at a server, i.e., a computing device that provides services such as data processing and storage for a user terminal; "server" may generally refer to a single server or a server cluster. A user terminal is typically an electronic device operated directly by a user.
In the present application, specifically (see fig. 1A), the server obtains a request message, provided by the user terminal, for invoking multiple image pickup devices to focus under a macro condition. Based on this request, the server configures a zoom coefficient for zooming the image pickup devices of the terminal according to their focusing parameter information; the zoom coefficient is a parameter indicating switching among the terminal's image pickup devices so as to invoke the one with the macro function for focusing. The server then invokes the image pickup device of the terminal to recognize the object to be recognized based on the zoom coefficient and obtains the recognition result. Finally, the recognition result is provided to the user terminal, so that the user terminal renders it on the page of the target application.
Fig. 1 and fig. 1A above illustrate application scenarios of the object recognition method of the present application. The embodiments of the present application do not specifically limit the application scenario; the scenarios above are provided only to facilitate understanding of the object recognition method and do not limit it. Other application scenarios of the object recognition method are not described in detail here.
First embodiment
A first embodiment of the present application provides an object recognition method, which is described below with reference to fig. 3. Applicable scenarios of the object recognition method may be found in the scenario embodiments above, which also provide some examples for this embodiment.
Fig. 3 is a flowchart of an object recognition method according to a first embodiment of the present application.
The object identification method comprises the following steps.
Step S301: in response to detecting a call operation of the virtual camera in the target application to the image pickup apparatus of the terminal, a zoom coefficient for zooming the image pickup apparatus of the terminal is configured according to focusing parameter information of the image pickup apparatus of the terminal.
In this embodiment, in order to enable clear focusing on an object to be identified under a macro condition, when the invoking operation of the virtual camera in the target application on the image pickup device of the terminal is detected, a zoom coefficient for zooming the image pickup device is configured according to the focusing parameter information of the image pickup device of the terminal. The zoom coefficient is a parameter set to indicate switching among the image pickup devices of the terminal so as to invoke the one with the macro function for focusing. At the same time, an auto-focus parameter for zooming the image pickup device of the terminal is configured.
Among the image pickup devices of a terminal, focusing parameter information differs from device to device: some image pickup devices may be wide-angle, some ultra-wide-angle, and some telephoto. In order to be able to focus using the macro function of an image pickup device while recognizing the object to be recognized, the zoom coefficient and auto-focus parameter for zooming the terminal's image pickup devices may be configured based on the wide-angle, ultra-wide-angle, and telephoto functions of the devices. The auto-focus parameters include the auto-focus mode and the time-interval information for auto-focusing. In practice, an image pickup device with an ultra-wide-angle function can provide a better macro function. In this embodiment, the macro function refers to the ability to photograph an object close to the lens, i.e., the lens's close-up shooting capability.
In the present embodiment, the zoom coefficient and the auto-focus parameter are parameters set to indicate switching among the image pickup devices of the terminal so as to invoke the one with the macro function. The zoom coefficient, i.e., the focusing coefficient, is mainly used to change the focal length of the image pickup devices so as to obtain a sharp image. For example, when shooting with multiple image pickup devices (mainly lenses) in a terminal, each image pickup device corresponds to a certain focal length and has its own auto-focus parameter (comprising the auto-focus mode and the auto-focus time interval); the zoom coefficient is mainly used to switch among and invoke the multiple image pickup devices. For example, if the focusing parameter information of a first image pickup device indicates a telephoto shooting function, and that of a second image pickup device indicates an ultra-wide-angle shooting function, setting the zoom coefficient and the auto-focus parameter enables switching between the two devices so that the macro focusing function can be used.
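The zoom-coefficient and auto-focus configuration described above can be sketched with AVFoundation as follows (a minimal Swift sketch; the function name is illustrative, and the clamping policy is an assumption, not the patent's exact strategy):

```swift
import AVFoundation

// Sketch: apply a zoom coefficient and continuous auto-focus to a capture
// device. lockForConfiguration() is required before mutating device state.
func configure(device: AVCaptureDevice, zoomFactor: CGFloat) throws {
    try device.lockForConfiguration()
    defer { device.unlockForConfiguration() }

    // Clamp the requested coefficient to the device's supported range.
    let clamped = min(max(zoomFactor, device.minAvailableVideoZoomFactor),
                      device.maxAvailableVideoZoomFactor)
    device.videoZoomFactor = clamped

    // Auto-focus parameter: keep refocusing as a macro subject moves.
    if device.isFocusModeSupported(.continuousAutoFocus) {
        device.focusMode = .continuousAutoFocus
    }
}
```

On a virtual multi-camera device, setting `videoZoomFactor` past a switch-over factor is what causes the system to change the active lens.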
In the present embodiment, the macro function of the image pickup device is to be exploited; in practice, the macro function can be used well when the terminal's image pickup device has an ultra-wide-angle shooting function.
In general, before the virtual camera invokes the image capturing apparatus of the terminal, the virtual camera also needs to be configured in the target application. In particular, the configuration of the virtual camera may be performed according to a flowchart as illustrated in fig. 4. Fig. 4 is a flow chart for configuring a virtual camera.
First, step S401 is executed: a virtual camera is created in the target application, the virtual camera corresponding to the image capturing device of the terminal.
After that, step S402 is performed: initializing parameters of the virtual camera to obtain first output result information of the virtual camera. This step corresponds to generating the Session, i.e., the AVCaptureSession (audio/video capture session; AV stands for Audio and Video), in the virtual camera configuration.
After that, step S403 is performed: configuring the image pickup parameters of the image pickup device of the terminal, and adjusting the first output result information according to the configured parameters to obtain second output result information. This step corresponds to generating the Device, i.e., the AVCaptureDevice (audio/video capture device, that is, the image pickup device), in the virtual camera configuration.
After that, step S404 is performed: performing visual conversion on the second output result information to obtain corresponding visualization result information. This step corresponds to generating the Preview, i.e., the AVCaptureVideoPreviewLayer (audio/video capture preview layer), in the virtual camera configuration.
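The flow of steps S401 to S404 can be sketched in Swift as follows (a minimal sketch using standard AVFoundation calls; the function name is illustrative and error handling is reduced to early returns):

```swift
import AVFoundation

// Sketch of S401–S404: create the capture session (the virtual camera),
// attach a device input (the terminal's image pickup device), and build
// the preview layer used for visualization.
func buildVirtualCamera() -> AVCaptureVideoPreviewLayer? {
    let session = AVCaptureSession()                    // S402: AVCaptureSession
    session.beginConfiguration()
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video,
                                               position: .back),
          let input = try? AVCaptureDeviceInput(device: device),
          session.canAddInput(input) else {
        return nil
    }
    session.addInput(input)                             // S403: AVCaptureDevice wired in
    session.commitConfiguration()

    let preview = AVCaptureVideoPreviewLayer(session: session) // S404: preview layer
    preview.videoGravity = .resizeAspectFill
    return preview
}
```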
The visualization result information above is obtained to facilitate configuring the zoom coefficient and auto-focus parameter for zooming the virtual camera. In practice, in this embodiment, the zoom coefficient and auto-focus parameter may be configured according to the visualization result information and the focusing parameter information of the terminal's image pickup device; the focusing parameter information corresponds to the telephoto, wide-angle, and ultra-wide-angle functions of the image pickup device.
After step S404 is performed, step S405 may be executed: configure the Session frame rate, focusing interval, and so on (in effect, the correspondingly configured zoom factor and auto-focus parameters for zooming the terminal's image capture device), and then add the Preview to the view.
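The steps above can be sketched in Swift as a minimal configuration, assuming the rear wide-angle camera; `containerView` is a hypothetical host view for the preview layer, not part of the original text.

```swift
import AVFoundation
import UIKit

// Minimal sketch of steps S401–S405.
func configureVirtualCamera(in containerView: UIView) throws -> AVCaptureSession {
    // S402: create the Session (first output result information)
    let session = AVCaptureSession()

    // S403: pick and configure the terminal's capture Device
    guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                               for: .video, position: .back) else {
        throw NSError(domain: "VirtualCamera", code: -1)
    }
    let input = try AVCaptureDeviceInput(device: device)
    if session.canAddInput(input) { session.addInput(input) }

    // S405: frame rate is configured on the device's active configuration
    try device.lockForConfiguration()
    device.activeVideoMinFrameDuration = CMTime(value: 1, timescale: 30) // at most 30 fps
    device.unlockForConfiguration()

    // S404: visualize the output via the Preview layer added to the view
    let preview = AVCaptureVideoPreviewLayer(session: session)
    preview.frame = containerView.bounds
    containerView.layer.addSublayer(preview)

    return session
}
```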
In practice, the key step of the above procedure is the Device (AVCaptureDevice). During image or video capture, the AVCaptureDevice performs the related initialization of the image capture device and the microphone, and allows basic settings on the image capture device, such as basic settings of the flash, torch, focus, exposure, white balance, and other parameters.
The original Device logic calls the system's defaultDevice(withMediaType:) interface (corresponding to the second interface). After the terminal system was upgraded past a certain version, this interface can no longer achieve automatic switching among the terminal's multiple cameras. According to the hints in the API (Application Programming Interface), the newer AVCaptureDeviceDiscoverySession (corresponding to the first interface) must be used instead.
The first interface takes three parameters: deviceTypes (device types), mediaType (media type), and position. mediaType is the media type of the device — audio, text, and so on; this embodiment selects AVMediaTypeVideo. position specifies whether the front or rear camera is used; in general the rear camera is used, namely AVCaptureDevicePositionBack (the rear audio/video capture device, i.e., the rear camera).
deviceTypes specifies which kinds of image capture device to discover, and here the selection must be distinguished by system version. Since this application mainly exploits the macro function, ultra-wide-angle capture must be added. In this application the terminal includes at least a wide-angle and an ultra-wide-angle camera; terminals that lack an ultra-wide-angle camera are adapted accordingly.
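As a sketch, the first interface can be invoked with the three parameters described above; the iOS 13 availability check is an assumption about when the dual-wide and ultra-wide device types became available.

```swift
import AVFoundation

// Sketch of the "first interface" (AVCaptureDeviceDiscoverySession).
func discoverBackCamera() -> AVCaptureDevice? {
    var deviceTypes: [AVCaptureDevice.DeviceType] = [.builtInWideAngleCamera]
    if #available(iOS 13.0, *) {
        // Ultra-wide capture is required for the macro function.
        deviceTypes.insert(.builtInDualWideCamera, at: 0)
        deviceTypes.append(.builtInUltraWideCamera)
    }
    let discovery = AVCaptureDevice.DiscoverySession(
        deviceTypes: deviceTypes,   // deviceTypes
        mediaType: .video,          // mediaType: AVMediaTypeVideo
        position: .back)            // position: rear camera
    // Devices come back in the order of the deviceTypes list, so the first
    // match is the preferred (most capable) device.
    return discovery.devices.first
}
```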
The telephoto, wide-angle, and ultra-wide-angle capabilities of the image capture devices (i.e., the focusing parameter information of the image capture devices) are described below:
- AVCaptureDeviceTypeBuiltInWideAngleCamera: wide-angle camera (the terminal's default camera, corresponding to a focal length of about 28 mm);
- AVCaptureDeviceTypeBuiltInTelephotoCamera: telephoto camera (2 or 3 times the focal length of the default camera; invoked via AVCaptureDeviceDiscoverySession);
- AVCaptureDeviceTypeBuiltInUltraWideCamera: ultra-wide-angle camera (0.5 times the focal length of the default camera; invoked via AVCaptureDeviceDiscoverySession);
- AVCaptureDeviceTypeBuiltInDualCamera: wide-angle and telephoto cameras, switchable automatically; invoked via AVCaptureDeviceDiscoverySession;
- AVCaptureDeviceTypeBuiltInDualWideCamera: wide-angle and ultra-wide-angle cameras, switchable automatically; invoked via AVCaptureDeviceDiscoverySession;
- AVCaptureDeviceTypeBuiltInTripleCamera: wide-angle, telephoto, and ultra-wide-angle cameras, switchable automatically; invoked via AVCaptureDeviceDiscoverySession;
- AVCaptureDeviceTypeBuiltInTrueDepthCamera: an infrared camera capable of acquiring depth data.
The built-in dual-wide camera in the terminal supports the following functions: automatic switching from one camera to the other when the zoom factor, light level, and focus position allow; generating depth data by measuring the disparity between the images captured by the ultra-wide-angle and wide-angle cameras; and delivering photos from both constituent cameras with a single photo capture request.
Switching is performed automatically from one camera to another when the zoom factor, light level, and focus position allow. Automatic camera switching therefore requires that the zoom factor, the light level, and the focus position lie within the ranges required by the terminal system.
The scenes captured by the application use auto-focus, so the focus position need not be set actively — focusing follows the position of the photographed subject. The light level is that of a normal environment. Hence only the zoom factor (videoZoomFactor) needs to be determined.
The zoom factor is a value that controls the cropping and enlargement of images captured by the AVCaptureDevice (the image capture device); for example, a value of 2.0 doubles the size of the image subject (and halves the field of view). The allowed values range from 1.0 (full field of view) up to the value of the videoMaxZoomFactor attribute (the maximum video zoom factor) of the active AVCaptureDeviceFormat (the format of the audio/video capture device).
The AVCaptureDevice achieves the zoom effect by cropping around the center of the image captured by the sensor. At low zoom factors, the cropped image is equal to or larger than the output size. At higher zoom factors, the device must scale the cropped image up to the output size, which reduces image quality. The videoZoomFactorUpscaleThreshold attribute of the active AVCaptureDeviceFormat indicates the factor beyond which upscaling occurs.
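A hedged sketch of setting the zoom factor — clamping it to videoMaxZoomFactor and checking the upscale threshold — might look like this; the function name is illustrative.

```swift
import AVFoundation

// Sketch: set videoZoomFactor safely within the allowed range.
func setZoom(_ factor: CGFloat, on device: AVCaptureDevice) throws {
    let format = device.activeFormat
    // Allowed range is 1.0 (full field of view) ... videoMaxZoomFactor.
    let clamped = min(max(factor, 1.0), format.videoMaxZoomFactor)
    if clamped > format.videoZoomFactorUpscaleThreshold {
        // Beyond this threshold the cropped image is scaled up to the
        // output size, so image quality degrades.
        print("zoom \(clamped) will be digitally upscaled")
    }
    try device.lockForConfiguration()
    device.videoZoomFactor = clamped
    device.unlockForConfiguration()
}
```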
Therefore, in the present embodiment, to use the automatic multi-camera switching described above (automatic switching between the wide-angle and ultra-wide-angle cameras), the zoom factor and the auto-focus parameters must be set.
In practice, the conditions for the camera to switch automatically are that the zoom factor, light level, and focus position lie within the system-required ranges. Because the target application's shooting scene uses auto-focus, the focus position need not be set actively — focusing follows the position of the subject to be identified. The light level is that of a normal environment. Thus only the zoom factor (videoZoomFactor) needs to be determined.
The zoom factor is a value that controls the cropping and enlargement of the AVCaptureDevice's image; for example, a value of 2.0 doubles the size of the image subject (and halves the field of view). In practice, when setting the zoom factor, the value may range from 1.0 (full field of view) up to the value of the videoMaxZoomFactor attribute (the maximum video zoom factor) of the active AVCaptureDeviceFormat (the format of the audio/video capture device).
There are mainly two implementations of setting and verifying the zoom factor:
In the first implementation, using the zoom tool on the scan page of the target application (which may refer to a subscription application), after the Device is acquired with AVCaptureDeviceTypeBuiltInDualWideCamera, experiments on the video-stream preview with videoZoomFactor in the range 1 to 3.5 show that, with the focus position set automatically and a zoom factor of 2, camera switching occurs: the video switches from the wide-angle camera to the ultra-wide-angle camera at second 3, and from the ultra-wide-angle camera back to the wide-angle camera at second 9.
The above is a tune-and-verify method. The second implementation instead takes the video zoom factor value from the AVCaptureDevice API (the virtualDeviceSwitchOverVideoZoomFactors attribute): at each listed factor, the field of view of one camera matches the full field of view of the next; the number of switch-over zoom factors in the attribute is always one less than the number of constituent cameras, and the factors are ordered the same as the cameras listed in the attribute. The number of factors thus corresponds to the number of camera invocations. Debugging with the attribute's values yields the same conclusion as the first implementation, but the second implementation is more adaptive.
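The second, attribute-driven approach can be sketched as follows; naming the attribute virtualDeviceSwitchOverVideoZoomFactors is an inference from the behavior described above (it is the AVCaptureDevice property with exactly these semantics on virtual devices).

```swift
import AVFoundation

// Sketch: read the switch-over zoom factors published by a virtual device
// instead of hard-coding a value found by experiment.
@available(iOS 13.0, *)
func switchOverFactors(of device: AVCaptureDevice) -> [CGFloat] {
    // For a virtual device composed of N cameras, this array holds N - 1
    // factors, ordered like `constituentDevices`; crossing one of these
    // values triggers the switch to the next camera.
    return device.virtualDeviceSwitchOverVideoZoomFactors.map {
        CGFloat($0.doubleValue)
    }
}
```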
In the second implementation, after setting the video zoom factor of the Session's device to the value obtained from the attribute, enabling auto-focus, and choosing a suitable focusing interval, recordings of the same photographed object made with the terminal's system camera and with the virtual camera in the target application were compared in the same scene. Both recordings switch at the same point (each switching from the wide-angle camera to the ultra-wide-angle camera), and the field-of-view content is consistent after the switch, which verifies the feasibility of this way of assigning the video zoom factor.
In this embodiment, adding the macro function affects the existing flow in which the target application identifies objects with the virtual camera. To keep that flow stable, a switch must be configured: whether the switch controlling the macro function is in the on state must be judged. At the same time, whether the target-recognition scene in the target application is being used to identify the object must be considered — the target-recognition scene can specifically be reached via the scan function in the target application — so the recognition scene also requires related adaptation.
Specifically, please refer to fig. 5, which is a flowchart of the process for adapting the existing virtual-camera configuration after the macro function is added.
First, step S501 is executed: creating a virtual camera in the target application; after that, step S502 is performed: judging whether a switch for controlling the macro function is in an open state or not; if yes, step S503 is executed: calling a first interface; if not, step S504 is performed: invoking a second interface; the step S503 or the step S504 is to call an appropriate interface to configure a zoom coefficient and an auto-focus parameter for zooming the image pickup apparatus of the terminal.
After step S503 or step S504 is performed, step S505 is performed: the first interface or the second interface is used for configuring a zoom coefficient and an automatic focusing parameter for zooming the imaging equipment of the terminal.
After that, step S506 is performed: judging whether the current virtual camera adopts a target recognition scene or not; if yes, step S507 is executed: focusing an object to be identified by using a manual focusing mode, and configuring manual zooming parameters; if not, then step S508 is performed: the value of the zoom factor is reset.
In the above procedure, the manual zoom factor is used to produce the change in the object's field of view required by the target-recognition scene. If the target-recognition scene is not in use, the value of the zoom factor is reset so that the reset value can be used to identify objects in the next round of the target-recognition scene.
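The branches of fig. 5 can be sketched as follows; `firstInterfaceLookup`, `secondInterfaceLookup`, the lens position 0.5, and the zoom value 1.2 are illustrative assumptions, not values from the original.

```swift
import AVFoundation

// S503: "first interface" — AVCaptureDeviceDiscoverySession path.
func firstInterfaceLookup() -> AVCaptureDevice? {
    if #available(iOS 13.0, *) {
        return AVCaptureDevice.DiscoverySession(
            deviceTypes: [.builtInDualWideCamera],
            mediaType: .video, position: .back).devices.first
    }
    return nil
}

// S504: "second interface" — legacy default-device path.
func secondInterfaceLookup() -> AVCaptureDevice? {
    AVCaptureDevice.default(for: .video)
}

// Sketch of steps S502–S508.
func adaptVirtualCamera(macroSwitchOn: Bool, usesRecognitionScene: Bool) throws {
    // S502–S504: choose the interface according to the macro-function switch.
    guard let camera = macroSwitchOn ? firstInterfaceLookup()
                                     : secondInterfaceLookup() else { return }

    try camera.lockForConfiguration()
    if usesRecognitionScene {
        // S507: manual focusing plus a manual zoom parameter.
        camera.setFocusModeLocked(lensPosition: 0.5, completionHandler: nil)
        camera.videoZoomFactor = 1.2
    } else {
        // S508: reset the zoom factor for the next round of recognition.
        camera.videoZoomFactor = 1.0
    }
    camera.unlockForConfiguration()
}
```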
Step S302: based on the zoom coefficient, the camera equipment of the calling terminal identifies the object to be identified.
In practice, invoking the terminal's image capture device to identify the object to be identified based on the zoom factor may mean: invoking the terminal's image capture device to identify the object based on the zoom factor and the auto-focus parameters.
In this embodiment, one implementation of invoking the terminal's image capture devices to identify the object based on the zoom factor and the auto-focus parameters is: first, determine the calling sequence information and zooming time information of the image capture devices based on the zoom factor and the auto-focus parameters; then, invoke the terminal's image capture devices to identify the object according to that calling sequence information and zooming time information. The calling sequence information and zooming time information are the zooming information involved in zooming the terminal's image capture devices while identifying the object.
For example, if the configured zoom factor is 1.2 and the auto-focus time is second 3 of the recognition, then the wide-angle camera is used to recognize the image during seconds 0–3; at second 3, the device switches to the ultra-wide-angle camera and the recognition field of view expands to 1.2 times. The calling sequence of cameras involved in zooming while recognizing the object is thus: the wide-angle camera first, then the ultra-wide-angle camera.
In the present embodiment, the number of zoom factors is one less than the number of camera invocations on the terminal. For example, when the terminal's cameras are invoked twice — the wide-angle camera first and the ultra-wide-angle camera second — only one zoom factor (1.2) needs to be set.
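This relation can be sketched with a small helper; the camera names and the 1.2 factor are illustrative only.

```swift
// One step per invoked camera; the factor marks where the switch to the
// next camera occurs (nil for the last camera in the sequence).
struct CallStep {
    let camera: String
    let zoomFactorAtSwitch: Double?
}

// Builds the call sequence, requiring exactly N - 1 switch factors for
// N invoked cameras, in the same order as the cameras.
func buildCallSequence(cameras: [String], switchFactors: [Double]) -> [CallStep]? {
    guard switchFactors.count == cameras.count - 1 else { return nil }
    return cameras.enumerated().map { index, camera in
        CallStep(camera: camera,
                 zoomFactorAtSwitch: index < switchFactors.count
                     ? switchFactors[index] : nil)
    }
}

// Two invocations (wide-angle, then ultra-wide-angle) need a single factor:
let steps = buildCallSequence(cameras: ["wideAngle", "ultraWide"],
                              switchFactors: [1.2])
```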
As mentioned above, terminals without an ultra-wide-angle camera are adapted accordingly. This adaptation mainly considers different terminal models — that is, whether the object recognition method can be used on terminals containing one, two, or three cameras — and for the inapplicable cases, the terminal's method of identifying objects is adjusted according to the system version information.
First, the terminal's system determines which types of image capture device are present, and invokes them in the order of the imported deviceTypes list. That is, for terminal-model adaptation, once the appropriate branch is entered, the system can confirm whether the object recognition method is applicable. Second, because some deviceTypes enumeration values are restricted — in particular, AVCaptureDeviceTypeBuiltInDualWideCamera requires a certain system version — adaptation to the system version is also required.
Since some terminals' image capture devices may not include an ultra-wide-angle camera, to adapt to these cases, this embodiment further includes: obtaining the system version information of the terminal. In practice, based on the system version information, it can be judged whether the terminal's image capture devices include an ultra-wide-angle camera.
Therefore, one way of invoking the terminal's image capture device to identify the object according to the calling sequence information and zooming time information is: invoke the terminal's image capture device to identify the object according to the system version information together with the calling sequence information and zooming time information of the image capture devices.
Specifically, please refer to fig. 6, which is a flowchart for adapting to the terminal's existing image capture devices after the macro function is added.
First, step S601 is executed: acquiring a calling sequence of the created image pickup device; after that, step S602 is performed: and judging whether the rear camera equipment is started to identify the object to be identified.
If the determination result of step S602 is yes, step S603 is executed: judging whether the system version information meets a first preset condition or not; if the determination result of step S603 is yes, step S604 is executed: calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to a first preset condition, calling sequence information and zooming time information of the camera equipment; if the determination result in step S603 is no, step S605 is executed: and calling the image pickup equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to the second preset condition, calling sequence information of the image pickup equipment and zooming time information.
Meanwhile, when the determination result in step S602 is no, step S605 is also executed: and calling the image pickup equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to the second preset condition, calling sequence information of the image pickup equipment and zooming time information.
For example, if satisfying the first preset condition means satisfying a certain system version, the preset calling policy corresponding to the first preset condition may be invoking the default camera (the main camera, e.g., the wide-angle camera) and the ultra-wide-angle camera (both may be rear cameras); the preset calling policy corresponding to the second preset condition may be invoking an ordinary camera (e.g., the front camera). In fact, when the system version information does not satisfy the first preset condition, it can be considered to satisfy the second preset condition; in that case, the image capture devices include no ultra-wide-angle camera and the macro function cannot be used for focusing, so only the ordinary camera is used to identify the object in this flow.
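The policy selection in fig. 6 can be sketched as follows; treating "a certain system version" as iOS 13 is an assumption (the version at which dual-wide virtual devices became available).

```swift
import AVFoundation

// Sketch of the fig. 6 adaptation: choose a preset calling policy from the
// rear-camera check (S602) and the version check (S603).
func chooseCallPolicy(usesBackCamera: Bool) -> [AVCaptureDevice.DeviceType] {
    if usesBackCamera, #available(iOS 13.0, *) {
        // First policy: default (main/wide-angle) camera plus ultra-wide camera.
        return [.builtInDualWideCamera, .builtInWideAngleCamera,
                .builtInUltraWideCamera]
    }
    // Second policy: an ordinary camera only; macro focusing is unavailable.
    return [.builtInWideAngleCamera]
}
```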
The application provides an object recognition method, comprising: in response to detecting a call by a virtual camera in a target application to the image capture device of a terminal, configuring a zoom factor for zooming the terminal's image capture device according to the focusing parameter information of that device, the zoom factor being a parameter characterizing switching among the terminal's cameras so as to invoke the camera having the macro function for focusing; and, based on the zoom factor, invoking the terminal's image capture device to identify the object to be identified. In this method, when the object is identified, the virtual camera in the target application invokes the terminal's camera, so the zoom factor for zooming the terminal's camera can be configured, on the basis of the call, according to the focusing parameter information of the terminal's camera. After the zoom factor is configured, the terminal's camera is invoked, based on the zoom factor, to identify the object; in this process the cameras can be switched according to the zoom factor characterizing the invocation of the camera with the macro function, so the macro function of that camera can be used for focusing, and the object to be identified can be brought into clear focus when it is identified in the application.
Second embodiment
The second embodiment of the present application also provides an object recognition apparatus corresponding to the object recognition method provided in the first embodiment of the present application. Since the device embodiment is substantially similar to the first embodiment, the description is relatively simple, and reference is made to the partial description of the first embodiment for relevant points. The device embodiments described below are merely illustrative.
Fig. 7 is a schematic diagram of an object recognition device according to a second embodiment of the present application.
The object recognition apparatus 700 includes:
a configuration unit 701, configured to, in response to detecting a call by a virtual camera in a target application to the image capture device of a terminal, configure a zoom factor for zooming the terminal's image capture device according to the focusing parameter information of that device; the zoom factor is a parameter characterizing switching among the terminal's cameras to invoke the camera having the macro function for focusing;
and the calling unit 702 is used for calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient.
Optionally, the configuration unit is further configured to: configuring an automatic focusing parameter for zooming an image pickup device of the terminal;
The calling unit is specifically configured to:
and calling the camera equipment of the terminal to identify the object to be identified based on the zoom coefficient and the automatic focusing parameter.
Optionally, the method further comprises: a first judgment unit; the first judging unit is specifically configured to:
judging whether a switch for controlling the macro function is in an open state or not;
the configuration unit is specifically configured to: if the judgment result of the first judging unit is that the switch controlling the macro function is in the on state, call the first interface to configure the zoom factor for zooming the terminal's image capture device;
and if the judgment result of the first judging unit is that the switch controlling the macro function is in the off state, call the second interface to configure the zoom factor for zooming the terminal's image capture device.
Optionally, the method further comprises: a virtual camera creation unit and a virtual camera parameter setting unit;
the virtual camera creation unit is specifically configured to: creating a virtual camera in the target application, the virtual camera corresponding to the camera of the terminal, before configuring a zoom factor for zooming the camera of the terminal according to focus parameter information of the camera of the terminal;
The virtual camera parameter setting unit is specifically configured to: initializing parameters of the virtual camera to obtain first output result information of the virtual camera; configuring imaging parameters of imaging equipment of the terminal, and adjusting the first output result information according to the configured imaging parameters to obtain second output result information; and performing visual conversion on the second output result information to obtain visual result information corresponding to the second output result information.
Optionally, the configuration unit is specifically configured to:
and configuring a zooming coefficient for zooming the virtual camera according to the visualization result information and focusing parameter information of the imaging equipment of the terminal.
Optionally, the method further comprises: the second judging unit and the subsequent processing unit;
the second judging unit is used for judging whether the virtual camera adopts a target identification scene or not currently after configuring a zooming coefficient for zooming the virtual camera;
the subsequent processing unit is specifically configured to: if the virtual camera currently adopts the target recognition scene, focus the object to be identified in manual focusing mode and configure the manual zoom parameters; and if the virtual camera currently adopts a scene other than target recognition, reset the value of the zoom factor.
Optionally, the calling unit is specifically configured to:
determining calling sequence information and zooming time information of the image pickup device based on the zooming coefficient and the automatic focusing parameter; the calling sequence information and the zooming time information of the image pickup equipment are zooming information related to zooming of the image pickup equipment of the terminal in the process of identifying the object to be identified;
and calling the camera equipment of the terminal to identify the object to be identified according to the calling sequence information and the zooming time information of the camera equipment.
Optionally, the method further comprises: a system version information obtaining unit;
the system version information obtaining unit is specifically configured to: acquiring system version information of a terminal;
the calling unit is specifically configured to:
and calling the image pickup equipment of the terminal to identify the object to be identified according to the system version information, the calling sequence information of the image pickup equipment and the zooming time information.
Optionally, the calling unit is specifically configured to:
judging whether the system version information meets a first preset condition or not;
if the system version information meets a first preset condition, calling the camera equipment of the terminal to identify an object to be identified based on a preset calling strategy corresponding to the first preset condition, calling sequence information of the camera equipment and the zooming time information;
And if the system version information does not meet the first preset condition, calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to the second preset condition, calling sequence information of the camera equipment and the zooming time information.
Optionally, the method further comprises: a third judgment unit;
the third judging unit is configured to judge, before judging whether the system version information satisfies the first preset condition, whether the rear camera device has been enabled to identify the object to be identified;
if the judging result of the third judging unit is that the rear-mounted camera equipment is started to identify the object to be identified, the calling unit is specifically used for: executing the step of judging whether the system version information meets a first preset condition;
if the judging result of the third judging unit is that the rear camera equipment is not started to identify the object to be identified, the calling unit is specifically configured to: and calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to a second preset condition, calling sequence information of the camera equipment and the zooming time information.
Optionally, the number of zoom coefficients is one less than the number of times of invoking the image capturing apparatus of the terminal.
Third embodiment
The third embodiment of the present application also provides an electronic device corresponding to the method of the first embodiment of the present application.
Fig. 8 is a schematic diagram of an electronic device according to a third embodiment of the present application.
In this embodiment, an optional hardware structure of the electronic device 800 may be as shown in fig. 8, including: at least one processor 801, at least one memory 802, and at least one communication bus 805; the memory 802 includes a program 803 and data 804.
Bus 805 may be a communication device that transfers data between components within electronic device 800, such as an internal bus (e.g., a CPU–memory bus; CPU: central processing unit), an external bus (e.g., a universal serial bus port, a PCI Express port), and so forth.
In addition, the electronic device further includes: at least one network interface 806 and at least one peripheral interface 807. The network interface 806 provides wired or wireless communication with an external network 808 (e.g., the Internet, an intranet, a local area network, a mobile communication network, etc.); in some embodiments, the network interface 806 may include any number of network interface controllers (NICs), radio frequency (RF) modules, transponders, transceivers, modems, routers, gateways, wired network adapters, wireless network adapters, Bluetooth adapters, infrared adapters, near field communication (NFC) adapters, cellular network chips, and the like, in any combination.
Peripheral interface 807 is used to connect with peripherals, such as peripheral 1 (809 in fig. 8), peripheral 2 (810 in fig. 8), and peripheral 3 (811 in fig. 8). Peripherals, i.e., peripheral devices, may include, but are not limited to, cursor control devices (e.g., a mouse, touchpad, or touchscreen), keyboards, displays (e.g., cathode-ray-tube, liquid-crystal, or light-emitting-diode displays), video input devices (e.g., a camera or an input interface communicatively coupled to a video archive), and so on.
The processor 801 may be a CPU, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application.
The memory 802 may comprise high-speed RAM (Random Access Memory), and may further comprise non-volatile memory, such as at least one disk memory.
The processor 801 calls programs and data stored in the memory 802 to execute the method according to the first embodiment of the present application.
Fourth embodiment
The fourth embodiment of the present application also provides a computer storage medium storing a computer program that is executed by a processor to perform the method of the first embodiment of the present application, corresponding to the method of the first embodiment of the present application.
While the preferred embodiments have been described, they are not intended to limit the present application; any person skilled in the art may make variations and modifications without departing from the spirit and scope of the present application, so the protection scope of the present application shall be defined by the claims of the present application.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory. The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash RAM. Memory is an example of a computer-readable medium.
Computer-readable media include both permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transitory media), such as modulated data signals and carrier waves.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
It should be noted that the embodiments of the present application may involve the use of user data. In practical applications, a user's specific personal data may be used in the schemes described herein only within the scope permitted by the applicable laws and regulations of the relevant country and in compliance with them (for example, with the user's explicit consent and after the user has been actually notified).

Claims (14)

1. An object recognition method, comprising:
in response to detecting a calling operation of a virtual camera in a target application on an image pickup device of a terminal, configuring a zoom coefficient for zooming the image pickup device of the terminal according to focusing parameter information of the image pickup device of the terminal; the zoom coefficient is a parameter representing that the image pickup device invoked in the terminal is switched to an image pickup device with a macro function for focusing;
and calling the image pickup device of the terminal to identify an object to be identified based on the zoom coefficient.
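The two-step flow of claim 1 — derive a zoom coefficient from the camera's focusing parameter information, then invoke the camera with it — can be sketched as follows. This is a minimal illustration only: all names (`configure_zoom_coefficient`, `min_focus_distance_mm`) and the threshold and coefficient values are assumptions, not taken from the patent or from any real camera API.

```python
# Hypothetical sketch of claim 1. Threshold/coefficient values are assumed.

MACRO_FOCUS_THRESHOLD_MM = 100  # assumed: main lens cannot focus closer than this

def configure_zoom_coefficient(focus_params: dict) -> float:
    """Derive a coefficient from focusing parameter information; a value
    above 1.0 signals switching to the macro-capable image pickup device."""
    min_focus_mm = focus_params.get("min_focus_distance_mm", 0)
    return 2.0 if min_focus_mm >= MACRO_FOCUS_THRESHOLD_MM else 1.0

def identify_object(focus_params: dict, invoke_camera) -> str:
    """Claim 1, step 2: call the terminal's camera with the configured
    zoom coefficient to identify the object."""
    zoom = configure_zoom_coefficient(focus_params)
    return invoke_camera(zoom)
```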
2. The method as recited in claim 1, further comprising: configuring an automatic focusing parameter for zooming an image pickup device of the terminal;
the step of calling the image pickup device of the terminal to identify the object to be identified based on the zoom coefficient comprises:
calling the image pickup device of the terminal to identify the object to be identified based on the zoom coefficient and the automatic focusing parameter.
3. The method as recited in claim 1, further comprising:
judging whether a switch for controlling the macro function is in an on state;
if the switch for controlling the macro function is in an on state, configuring a zoom coefficient for zooming the image pickup device of the terminal according to the focusing parameter information of the image pickup device of the terminal comprises: calling a first interface to configure the zoom coefficient for zooming the image pickup device of the terminal;
and if the switch for controlling the macro function is in an off state, configuring a zoom coefficient for zooming the image pickup device of the terminal according to the focusing parameter information of the image pickup device of the terminal comprises: calling a second interface to configure the zoom coefficient for zooming the image pickup device of the terminal.
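Claim 3's branching — routing the zoom-coefficient configuration through a first or second interface depending on the macro-control switch — might look like this minimal sketch. Both interfaces are placeholder callables, not real APIs.

```python
# Hypothetical sketch of claim 3: choose the configuration interface
# based on the state of the macro-function switch.

def configure_zoom_via_switch(macro_switch_on: bool,
                              first_interface,
                              second_interface,
                              focus_params: dict) -> float:
    if macro_switch_on:
        # switch on: configure through the first interface
        return first_interface(focus_params)
    # switch off: configure through the second interface
    return second_interface(focus_params)
```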
4. The method according to claim 2, further comprising, before configuring a zoom coefficient for zooming the image pickup apparatus of the terminal according to the focus parameter information of the image pickup apparatus of the terminal:
creating a virtual camera in the target application, wherein the virtual camera corresponds to the camera equipment of the terminal;
initializing parameters of the virtual camera to obtain first output result information of the virtual camera;
configuring imaging parameters of imaging equipment of the terminal, and adjusting the first output result information according to the configured imaging parameters to obtain second output result information;
and performing visual conversion on the second output result information to obtain visual result information corresponding to the second output result information.
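The preparation pipeline of claim 4 (create the virtual camera, initialize it to a first output, adjust by configured imaging parameters into a second output, then visualize) can be sketched as a chain of stages. Every value below is a stand-in assumption for illustration.

```python
# Hypothetical sketch of claim 4's pipeline; each stage is a placeholder
# for the corresponding patent step.

def prepare_virtual_camera() -> dict:
    camera = {"backed_by": "terminal_image_pickup_device"}        # create
    first_output = {"camera": camera, "initialized": True}        # initialize
    imaging_params = {"resolution": (1920, 1080)}                 # assumed params
    second_output = {**first_output, "imaging": imaging_params}   # adjust
    visual_result = {"preview": True, **second_output}            # visualize
    return visual_result
```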
5. The method according to claim 4, wherein the configuring the zoom coefficient for zooming the image capturing apparatus of the terminal according to the focusing parameter information of the image capturing apparatus of the terminal includes:
and configuring a zooming coefficient for zooming the virtual camera according to the visualization result information and focusing parameter information of the imaging equipment of the terminal.
6. The method of claim 5, further comprising, after configuring the zoom factor for zooming the virtual camera:
judging whether the virtual camera currently adopts a target recognition scene;
if the virtual camera currently adopts a target recognition scene, focusing on the object to be identified in a manual focusing mode, and configuring a manual zooming parameter; and if the virtual camera does not currently adopt a target recognition scene, resetting the value of the zoom coefficient.
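Claim 6's post-configuration decision — manual focus and a manual zoom parameter in a target recognition scene, otherwise reset the coefficient — as a sketch. The state keys and the reset value are assumptions.

```python
# Hypothetical sketch of claim 6: after the zoom coefficient is configured,
# branch on whether the virtual camera is in a target recognition scene.

DEFAULT_ZOOM = 1.0  # assumed reset value

def apply_scene_policy(state: dict) -> dict:
    if state.get("scene") == "target_recognition":
        state["focus_mode"] = "manual"  # focus the object manually
        state["manual_zoom"] = state.get("zoom_coefficient", DEFAULT_ZOOM)
    else:
        state["zoom_coefficient"] = DEFAULT_ZOOM  # reset the coefficient
    return state
```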
7. The method according to claim 2, wherein the invoking the image capturing device of the terminal to recognize the object to be recognized based on the zoom coefficient and the auto-focus parameter comprises:
determining calling sequence information and zooming time information of the image pickup device based on the zooming coefficient and the automatic focusing parameter; the calling sequence information and the zooming time information of the image pickup equipment are zooming information related to zooming of the image pickup equipment of the terminal in the process of identifying the object to be identified;
and calling the camera equipment of the terminal to identify the object to be identified according to the calling sequence information and the zooming time information of the camera equipment.
8. The method as recited in claim 7, further comprising: acquiring system version information of a terminal;
the step of calling the camera equipment of the terminal to identify the object to be identified according to the calling sequence information and the zooming time information of the camera equipment comprises the following steps:
and calling the image pickup equipment of the terminal to identify the object to be identified according to the system version information, the calling sequence information of the image pickup equipment and the zooming time information.
9. The method according to claim 8, wherein the calling the image capturing apparatus of the terminal to recognize the object to be recognized based on the system version information, the calling order information of the image capturing apparatus, and the zoom time information, comprises:
judging whether the system version information meets a first preset condition or not;
if the system version information meets a first preset condition, calling the camera equipment of the terminal to identify an object to be identified based on a preset calling strategy corresponding to the first preset condition, calling sequence information of the camera equipment and the zooming time information;
and if the system version information does not meet the first preset condition, calling the camera equipment of the terminal to identify the object to be identified based on a preset calling strategy corresponding to the second preset condition, calling sequence information of the camera equipment and the zooming time information.
10. The method of claim 9, further comprising, prior to determining whether the system version information satisfies a first preset condition:
judging whether rear camera equipment is started to identify the object to be identified;
if the rear camera equipment is started to identify the object to be identified, executing the step of judging whether the system version information meets the first preset condition;
and if the rear camera equipment is not started to identify the object to be identified, calling the image pickup device of the terminal to identify the object to be identified according to the system version information, the calling sequence information of the image pickup device and the zooming time information comprises: calling the image pickup device of the terminal to identify the object to be identified based on the preset calling strategy corresponding to the second preset condition, the calling sequence information of the image pickup device and the zooming time information.
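Claims 9 and 10 together form a small decision tree: first check whether the rear camera is enabled, then whether the system version information meets the first preset condition. A sketch, with the version predicate and strategy names entirely assumed:

```python
# Hypothetical sketch of claims 9-10: select the preset calling strategy.
# The version threshold and strategy labels are illustrative only.

def select_calling_strategy(rear_camera_enabled: bool,
                            system_version: int,
                            min_version_for_first: int = 12) -> str:
    if not rear_camera_enabled:
        # claim 10: rear camera not started -> second preset strategy
        return "second_strategy"
    if system_version >= min_version_for_first:
        # claim 9: first preset condition met
        return "first_strategy"
    return "second_strategy"
```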
11. The method according to claim 1, wherein the number of zoom coefficients is one less than the number of times the image capturing apparatus of the terminal is invoked.
12. An object recognition apparatus, comprising:
a configuration unit, configured to configure a zoom coefficient for zooming an image pickup device of a terminal according to focusing parameter information of the image pickup device of the terminal in response to detecting a calling operation of a virtual camera in a target application on the image pickup device of the terminal; the zoom coefficient is a parameter representing that the image pickup device invoked in the terminal is switched to an image pickup device with a macro function for focusing;
and a calling unit, configured to call the image pickup device of the terminal to identify an object to be identified based on the zoom coefficient.
13. An electronic device, comprising:
a processor;
a memory for storing a computer program which, when run by the processor, performs the method of any one of claims 1-11.
14. A computer storage medium, characterized in that the computer storage medium stores a computer program which, when executed by a processor, performs the method of any one of claims 1-11.
CN202310210820.2A 2023-02-28 2023-02-28 Object recognition method, device, electronic equipment and computer storage medium Pending CN116320743A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310210820.2A CN116320743A (en) 2023-02-28 2023-02-28 Object recognition method, device, electronic equipment and computer storage medium


Publications (1)

Publication Number Publication Date
CN116320743A true CN116320743A (en) 2023-06-23

Family

ID=86825171

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310210820.2A Pending CN116320743A (en) 2023-02-28 2023-02-28 Object recognition method, device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN116320743A (en)

Similar Documents

Publication Publication Date Title
CN111373727B (en) Shooting method, device and equipment
WO2021073331A1 (en) Zoom blurred image acquiring method and device based on terminal device
US10951833B2 (en) Method and device for switching between cameras, and terminal
CN107950018B (en) Image generation method and system, and computer readable medium
JP4872797B2 (en) Imaging apparatus, imaging method, and imaging program
KR102229811B1 (en) Filming method and terminal for terminal
WO2017008353A1 (en) Capturing method and user terminal
US20180007292A1 (en) Imaging device, imaging method, and image processing device
US20140071303A1 (en) Processing apparatus, processing method, and program
TWI720445B (en) Method and device of image fusion in camera device
WO2019011079A1 (en) Method and apparatus for inhibiting aec jump, and terminal device
CN109923850B (en) Image capturing device and method
US9167150B2 (en) Apparatus and method for processing image in mobile terminal having camera
US20140071302A1 (en) Processing apparatus, processing method, and program
CN111970437B (en) Text shooting method, wearable device and storage medium
US7684688B2 (en) Adjustable depth of field
WO2018196854A1 (en) Photographing method, photographing apparatus and mobile terminal
CN113473018B (en) Video shooting method and device, shooting terminal and storage medium
CN116320743A (en) Object recognition method, device, electronic equipment and computer storage medium
US8319838B2 (en) Method for enabling auto-focus function, electronic device thereof, recording medium thereof, and computer program product using the method
TW201911853A (en) Dual-camera image pick-up apparatus and image capturing method thereof
JP6645711B2 (en) Image processing apparatus, image processing method, and program
WO2019134513A1 (en) Shot focusing method, device, storage medium, and electronic device
JP5182395B2 (en) Imaging apparatus, imaging method, and imaging program
CN114339017B (en) Distant view focusing method, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination