OpenAI is rolling out new beta features to ChatGPT Plus subscribers right now. Subscribers have reported that the update adds the ability to upload and work with files, along with multimodal support. In practice, users won’t have to select modes like Browse with Bing from the GPT-4 drop-down menu; instead, ChatGPT will guess what they want based on the context.
The new features bring some of the office-oriented capabilities of OpenAI’s ChatGPT Enterprise plan to the chatbot’s standalone individual subscription. I don’t seem to have the multimodal upgrade on my Plus plan yet, but I was able to test the Advanced Data Analysis feature, which works as expected. Once a file is sent to ChatGPT, it takes a few moments to digest it before it’s ready to work with; the chatbot can then summarize the data, answer questions about it, or generate visualizations based on prompts.
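For context, Advanced Data Analysis works by writing and running Python behind the scenes. Here is a minimal sketch of the kind of script it might produce for a “summarize this and chart it” prompt, assuming a hypothetical sales.csv with region and revenue columns (the file name and columns are illustrative, not from the update itself):

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical uploaded file and column names, stand-ins for whatever you send to ChatGPT.
df = pd.read_csv("sales.csv")

# Summarize the data: basic descriptive statistics for every column.
print(df.describe(include="all"))

# Answer a question: total revenue per region, largest first.
totals = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(totals)

# Generate a visualization: a bar chart of revenue by region.
totals.plot(kind="bar", title="Revenue by region")
plt.tight_layout()
plt.savefig("revenue_by_region.png")
```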
The chatbot isn’t limited to text files. On Threads, a user posted screenshots of a conversation in which they uploaded an image of a capybara and asked ChatGPT, via DALL-E 3, to create a Pixar-style image based on it. They then built on the first image’s concept by uploading another image, this time of a skateboard, and asking ChatGPT to work it into the picture and, for some reason, to put a hat on it too.
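Outside ChatGPT, DALL-E 3 is also reachable through OpenAI’s API, though the generation endpoint takes a text prompt rather than an uploaded image, so the vision step ChatGPT performs has to be replaced with a written description. A rough sketch of the generation half, assuming the openai Python package and a made-up prompt standing in for what ChatGPT would write from the capybara photo:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical prompt; in the ChatGPT flow above, GPT-4 derives this from the uploaded photo.
response = client.images.generate(
    model="dall-e-3",
    prompt="A Pixar-style capybara riding a skateboard, wearing a hat",
    size="1024x1024",
    n=1,
)

# The API returns a URL to the generated image.
print(response.data[0].url)
```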