Could I bum one of your 100 allotted Deep Research tasks this month to see if it'd help with something I'm working on and would be worth the $200 subscription?
sounds fun! absolutely
Excellent, thank you! I'll message with prompt details.
It sounds like OAI really kicked up the 4d3d3d3 on this latest release and I'm looking forward to loading my first sequence.
https://chatgpt.com/share/67a7feff-afcc-800d-8caf-df9379dd58b4
curious how it looks to you!
This exceeded my expectations and now puts me in the uncomfortable position of really having to consider paying $200/month for a Pro subscription.
I'd read criticism of Deep Research that its default output tended toward generic, padded overviews - as if it were an undergrad trying to hit a word count on a term paper. (Who among us ...) I didn't have that problem with this example, at least. Perhaps it's because my prompt was quite explicit about what questions I wanted answered, what types of sources would be useful, and even a few examples of the sort of judgement calls I wanted DR to make. Separately, I'd read that DR seemed to lose focus or slip in quality if the prompt had too many parts: again, on the basis of this example I'd say the more guidance, the better.
As a side note, this is much better than the results from various Gemini models (1.5 Deep Research, 2.0 Pro Experimental, and 2.0 Flash Thinking). I'd been relying on those because of price (free in AI Studio), long context (not that I really need 2 million tokens, but it feels good knowing that uploading a PDF won't break something), and generally high quality.
The links it cites are appropriate, if not all that "deeply" hidden. They're what I'd find on my own through five minutes of Googling, but then again, for me there's no such thing as five minutes of Googling - only five hours of progressively more distracted tab opening that will never get synthesized.
That, to me, is the main takeaway. It's seductive to tell yourself that the bottleneck when working on a problem is that you don't have enough information, but with or without AI assistants that's very rarely true. Realistically, this report from Deep Research is 95% as good as what I'd have come up with on my own. Which doesn't mean I'm now a subject matter expert, only that it's harder for me to pretend that the way to achieve that expertise would be reading one more paper on the subject.
we have important work to do