Apple’s Ferret neural network has been trained to accept images as part of requests
Apple has released Ferret, a multimodal language model that works with both text and images. The release was not accompanied by any announcement from the company, but it has attracted the attention of artificial intelligence specialists.
The neural network analyzes an image, determines what objects it contains, and outlines them with bounding boxes. Ferret can treat a region of the picture as part of the request, and the answer is returned as text. Users can, for example, ask the model to identify an animal in a photograph, after which it provides a detailed answer and can clarify information about other actions or objects in the scene. Ferret has been released as an open model, likely because Apple did not want to position it as a commercial competitor to ChatGPT.
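The core idea described above, combining a text question with an image region so the region itself becomes part of the request, can be sketched in a few lines. Note that the `Region` type and the `<box>` token format below are invented for illustration only; they are not Ferret's actual API or prompt syntax.

```python
from dataclasses import dataclass


@dataclass
class Region:
    # Bounding box in pixel coordinates (hypothetical format).
    x1: int
    y1: int
    x2: int
    y2: int


def build_region_prompt(question: str, region: Region) -> str:
    """Embed a boxed image region into a text prompt, mimicking the
    'region as part of the request' idea. The special token format
    here is an assumption made for this sketch, not Ferret's real one."""
    box = f"<box>{region.x1},{region.y1},{region.x2},{region.y2}</box>"
    return f"{question} {box}"


# Ask about a specific part of the picture rather than the whole image.
prompt = build_region_prompt("What animal is in this region?", Region(40, 60, 220, 300))
print(prompt)  # → What animal is in this region? <box>40,60,220,300</box>
```

A real grounded model would pair such a prompt with the image's visual features; the point of the sketch is only that the query carries both modalities at once.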
Previously, Apple released the iOS 17.2 update, which fixed a number of bugs and added support for spatial video recording for the Vision Pro headset, along with the Journal app. The company is also testing a new feature that would offer discounts on new subscriptions to users who are already subscribed to one of its services.