I just gave it a whirl. Pretty neat, but definitely watch out for hallucinations. For instance, I asked it to compile a report on myself (vain, I know.) In this 500-word report (ok, I'm not that important, I guess), it made at least three errors.
It stated that I had 47,000 reputation points on Stack Overflow -- quite a surprise to me, given my minimal activity on Stack Overflow over the years. I popped over to the link it had cited (my profile on Stack Overflow) and it seems it confused my number of people reached (47k) with my reputation, a sadly paltry 525.
Then it cited an answer I gave on Stack Overflow on the topic of monkey-patching in PHP, using this as evidence for my technical expertise. Turns out that about 15 years ago, I _asked_ a question on this topic, but the answer was submitted by someone else. Looks like I don't have much expertise, after all.
Finally, it found a gem of a quote from an interview I gave. Or wait, that was my brother! Confusingly, we founded a company together, and we were both mentioned in the same article, but he was the interviewee, not I.
I would say it's decent enough for a springboard, but you should definitely treat the output with caution and follow the links provided to make sure everything is accurate.
"Pretty neat, but definitely watch out for hallucinations."
We'd never hire someone who just makes stuff up (or at least wouldn't keep them employed for long). Why are we okay with calling "AI" tools like this anything other than curious research projects?
Can't we just send LLMs back to the drawing board until they have some semblance of reliability?
Not sure if this was posted as humour, but I don't feel that way. In today's world, where I certainly would consider taking the blue pill, I'm having a blast with LLMs!
It has helped me learn things incredibly fast. I find them especially useful for filling gaps in my knowledge and exploring new topics in my own way and language, without needing to wait for an answer from a human (that could also be wrong).
Why does it feel to you that "we are entirely inside the bubble"?
In the early days of ChatGPT where it seemed like this fun new thing, I used it to "learn" C. I don't remember anything it told me, and none of the answers it gave me were anything that I couldn't find elsewhere in different forms - heck I could have flipped open Kernighan & Ritchie to the right page and got the answer.
I had a conversation with an AI/Bitcoin enthusiast recently. Maybe that already tells you everything you need to know about this person, but to hammer the point home, they made a claim similar to yours: "I learn much more and much better with AI". They also said they "fact check" things it "tells" them. Some moments later they told me "Bitcoin has its roots in Occupy Wall Street".
A simple web search tells you that Bitcoin was conceived a full 2 years before Occupy. How can they be related?
It's a simple error that can be fact checked simply. It's a pretty innocuous falsity in this particular case - but how many more falsehoods have they collected? How do those falsehoods influence them on a day-by-day basis?
How many falsehoods influence you?
A very well meaning activist posted a "comprehensive" list of all the programs that were to be halted by the grants and loans freezes last week. Some of the entries on the list weren't real, or weren't related to the freeze. They revealed they had used ChatGPT to help compile the list and then gone down it one-by-one to verify each entry.
With such meticulous attention to detail, incorrect information still filtered through.
I guess the real learning happens outside the AI, here in real life. Does the code run? Sure, it's on my local and not in production, but I would never have had the patience to get "that new thing working" without AI as an assistant.
Does the food taste good? Oops, there are a few too many vegetables here; they are never gonna fit in this pan of mine. Not a big deal, next time I'll be wiser.
AI is like a hypothesis machine. You're gonna have to figure out if the output is true. A few years ago, just testing any machine's "intelligence" was pretty quickly done and the machine failed miserably. Now, the accuracy is astonishing in comparison.
> How many falsehoods influence you?
That is a great question. The answer is definitely not zero. I try to live with a hacker mentality and I'm an engineer by trade. I read news and comments, which I'm not sure is good for me. But you also need some compassion towards yourself. It's not like ripping everything open will lead to salvation. I believe the truth does set you free, eventually. But all in one's time...
Anyway, AI is a tool like any other. Someone will hammer their fingers with it. I just don't understand the hate. It's not like we're drinking any AI kool-aid here. It's just like it was 30 years ago (in my personal journey): you had a keyboard and a machine, you asked it things and got gibberish. Now the conversation with it has just started to get interesting. Peace.
>It has helped me learn things incredibly fast. I find them especially useful for filling gaps in my knowledge and exploring new topics in my own way and language
and then you verify every single fact it tells you via traditional methods by confirming them in human-written documents, right?
Otherwise, how do you use the LLM for learning? If you don't know the answer to what you're asking, you can't tell if it's lying. It also can't tell if it's lying, so you can't ask it.
If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?
Occasionally an LLM helps me surface unknown keywords that make traditional searches easier, but they can't teach anything because they don't know anything. They can imagine things you might be able to learn from a real authority, but that's it. That can be useful! But it's not useful for learning alone.
And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?
I guess it all depends on the topic and levels of trust. How can I be certain that I have a brain? I just have to take something for granted, don't I? Of course I will "verify" the "important stuff", but what is important? How can I tell? Most of the time the only thing I need is a pointer in the right direction. Wrong advice? I'll know when I get there, I suppose.
I can remember numerous things I was told while growing up that aren't actually true, whether through plain lies and rumours or because of the long list of our cognitive biases.
> If you have to look up every fact it outputs after it does, using traditional methods, why not skip to just looking things up the old fashioned way and save time?
What is the old fashioned way? I mean, people learn "truths" these days from TikTok and YouTube. Some of the stuff is actually very good; you just have to distill it based on what you were taught at school. Nobody has yet declared LLMs a substitute for schools, maybe they soon will, but neither "guarantees" us anything. We could as well be taught political agendas.
I could order a book about construction, but I wouldn't build a house without asking a "verified" expert. Some people build anyway and we get some catastrophic results.
Levels of trust: it's all games and play until it gets serious, like what to eat or doing something that involves life-threatening physics. I take it as playing with a toy. Surely something great has come from only a few pieces of Lego?
> And if you're not verifying literally everything an LLM tells you.. are you sure you're learning anything real?
I guess you shouldn't do it that way. But really, so far the topics I've rigorously explored with ChatGPT for example, have been better than your average journalism. What is real?
Saying you need to verify "literally everything" both overestimates the frequency of hallucinations and underestimates the amount of wrong found in human-written sources. e.g. the infamous case of Google's AI recommending Elmer's glue on pizza was literally a human-written suggestion first: https://www.reddit.com/r/Pizza/comments/1a19s0/my_cheese_sli...
> without needing to wait for an answer from a human (that could also be wrong).
The difference is you have some reassurances that the human is not wrong - their expertise and experience.
The problem with LLMs, as demonstrated by the top-level comment here, is that they constantly make stuff up. While you may think you're learning things quickly, how do you know you're learning them "correctly", for lack of a better word?
Until an LLM can say "I don't know", I really don't think people should be relying on them as a first-class method of learning.
"Occasional nonsense" doesn't sound great, but would be tolerable.
Problem is - LLMs pull answers from their behind, just like a lazy student on an exam. "Hallucinations" is the word people use to describe this.
Those are extremely hard to spot - unless you happen to know the right answer already, at which point - why ask? And those are everywhere.
One example - recently there was quite a discussion about LLMs being able to understand (and answer) base16 (aka "hex") encoded prompts on the fly, so I went on to try base64, gzipped base64, zstd-compressed base64, etc.
To my surprise, the LLM got most of those encodings/compressions right, decoded/decompressed the question, and answered it flawlessly.
But with a few encodings, the LLM detected base64 correctly, identified the compression algorithm correctly, and then... instead of decompressing, made up a completely different payload and proceeded to answer that, without any hint that anything was amiss.
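For anyone who wants to try this themselves, here is a minimal sketch (my own, not the setup from that discussion) of preparing one such payload and checking the round-trip locally; the question text is just a placeholder, and the local decode confirms that any garbage in the model's answer is the model's doing, not the encoding's:

```python
import base64
import gzip

# Hypothetical question text; anything short works for the test.
question = "What is the capital of Finland?"

# Compress, then base64-encode, producing the payload to paste into the chat.
compressed = gzip.compress(question.encode("utf-8"))
payload = base64.b64encode(compressed).decode("ascii")
print(payload)  # prompt the model with e.g. "This is gzipped base64. Decode it and answer."

# Local round-trip check: exactly the transformation the model is asked to undo.
roundtrip = gzip.decompress(base64.b64decode(payload)).decode("utf-8")
assert roundtrip == question
```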
We really need LLMs to reliably calculate and express confidence. Otherwise they will remain mere toys.
I think as these things get more integrated into customer service workflows - especially for things like insurance claims - there's gonna start being a lot more buyer's remorse on everyone's part.
We've tried for decades to turn people into reliable robots, now many companies are running to replace people robots with (maybe less reliable?) robot-robots. What could go wrong? What are the escalation paths going to be? Who's going to be watching them?
It's given you some information and now you have to seek out a source to verify that it's correct.
Finding information is hard work. It's why librarian is a valuable skilled profession. What you've done by suggesting that I should "verify" or "proofread" what a glorified, water-wasting Markov chain has given me now entails me looking up that information to verify that it's correct. That's...not quite doubling the work involved but it's adding an unnecessary step.
I could have searched for the source in the first instance. I could have gone to the library and asked for help.
We spent time coming up with a question ("prompt engineering"! hah!), we used up a bunch of electricity for an answer to be generated and now you...want me to search up that answer to find the source? Why did we do the first step?
People got undergraduate degrees - hell, even PhDs - before generative AI.
Look up the tweet from someone who said "Sometimes when coming up with a good prompt for ChatGPT, I sometimes come up with the answer myself without needing to submit".
Verifying information is an order of magnitude easier than compiling it or synthesizing it in the first place. Prompt engineering is an order of magnitude easier still. This is obvious to most people, but apparently it needs to be said.
An entire day of generating responses with ChatGPT uses less water and energy than your morning shower. You seem terribly concerned about signaling the virtues of abstaining from technology use on behalf of purported resource misuse, yet you're sitting at a computer typing away.
You're not a serious person, and you're wasting everyone's time. Please leave the internet and go play with rocks in a cave.
Sometimes you don't need sources to verify something is correct; it's something you can directly verify. To reduce it to the easiest version of this, I ask for code to do something, it writes me code, I run my unit test, it passes, my time is saved!
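A toy sketch of that loop, with a hypothetical slugify() standing in for the model-written code and a hand-written test doing the verification (all names here are made up for the example, not from the comment above):

```python
import re

def slugify(title: str) -> str:
    # Pretend this body came back from the model; only the test below is mine.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

def test_slugify():
    # Hand-written expectations: if these pass, the generated code is good enough for my use.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already--slugged  ") == "already-slugged"

if __name__ == "__main__":
    test_slugify()
    print("ok")
```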
For other things, it depends, but if I'm asking it to do a survey I can look at its results and see if they fit what I'm looking for, check the sources it gives me, etc. People pay analysts/paralegals/assistants to do exactly this kind of work all the time expecting that they will need to check it over. I don't see how this is any different.
I don't think the library/electricity responses are serious, but to move on to the point about degrees... people also got those degrees before calculators, before computers, before air travel, before video calls, before the internet, before electricity, yet all of those things assist in creating knowledge. I think it's perfectly reasonable to look at these LLMs/chat assistants in the same light: as a tool that can augment human productivity in its own way.
I'm interested to hear more about how you can verify information without a source. What are you looking at when you search for the verification, exactly?
You can use them for whatever you like, or not use them. Everyone has a different bar for when technology is useful. My dad doesn't think EVs are useful due to the long charge times, but there are others who find it fully acceptable.
This doesn’t make LLMs worthless; you just need to structure your processes around fallibility. Much like a well designed release pipeline is built with the expectation that devs will write bugs that shouldn’t ship.
Yeah, I used to hire people, but then one of them made a mistake, now I'm done with them forever, they are useless. It is not I, who is directing the workers, who cannot create a process that is resistant to errors, it's definitely the fact that all people are worthless until they make no errors as there truly is no other way of doing things other than telling your intern to do a task then having them send it directly to the production line.
LLM are "great" in some use cases, "ok" in others, and "laughable" in more.
Some people might find $500 worth of value, in their specific use case, in those "great" and "ok" categories, where they get more value than "lies" out of it.
A few verifiable lies, vs hours of time, could be worth it for some people, with use cases outside of your perspective.
I disagree that this is a useful springboard. And I say that as an AI optimist.
A report full of factual errors that a careful intern wouldn't make is worse than useless (yes, yes, I've mentored interns).
If the hard part is the language, then do the research yourself, write an outline, and have the LLM turn it into complete sentences. That would at least be faster.
Here's the thing, though: If you do that, you're effectively proving that prose style is the low-value part of the work, and may be unnecessary. Which, as much as it pains me to say as a former English major, is largely true.
What's faster? Writing a 500 word report "from scratch" by researching the topic yourself, vs. having AI write it then having to fact check every answer and correct each piece manually?
This is why I don't use AI for anything that requires a "correct" answer. I use it to re-write paragraphs or sentences to improve readability etc, but I stop short of trusting any piece of info that comes out from AI.
> Then it cited an answer I gave on Stack Overflow [...] using this as evidence for my technical expertise. Turns out that about 15 years ago, I _asked_ a question on this topic, but the answer was submitted by someone else
Artificial dementia...
Some parties are releasing products well before they can ship well-working ones (I am not sure that their legal cover will be so solid), but database-aided output should and could become a strong check on that phenomenon of remembering badly. Very linearly, like humans: get an idea, then compare it to the data - it is due diligence and part of the verification step in reasoning. It is as if moves outside linear, pure product-progress reasoning are swaying the R&D towards directions outside the primary concerns. It's a form of procrastination.
I wonder if it’s carried over too much of that ‘helpful’ DNA from 4o’s RLHF. In that case, maybe asking for 500 words was the difficult part — it just didn’t have enough to say based on one SO post and one article, but the overall directives assume there is, and so the model is put into a position where it must publish.
Put another way, it seems this model faithfully replicates the incentives most academics have — publish a positive result, or get dinged. :)
Did it pick up your HN comments? Kadua claims that’s more than enough to roast me, … and it’s not wrong. It seems like there’s enough detail about you (or me) there to do a better job summarizing.
I didn't actually give it a goal of writing any particular length, but I do think that perhaps given my not-so-large online footprint, it may have felt "pressured" to generate content that simply isn't there.
It didn't pick up my HN comments, probably because my first and last name are not in my profile, though obviously that is my handle in a smooshed-together form.
This is very bearish for current AI. Seems like 99% reliability is still too low once errors compound. But I wonder if this is inherent to longer contexts or if it just depends on how it’s trained. In theory longer context => more errors.
Although I think people are the same: give them too big a problem and they get lost unless they take it in bites. So it seems like OpenAI's implementation is just bad, because o3's hallucination benchmark shouldn’t lead to such poor performance.
This is... very uncomfortable. An (expanded) AI summary of my HN and reddit usage would appear to be a pretty complete representation of my "online" identity/character. I remember when people would browse your entire comment history just to find something to discredit you on reddit, and that behavior was _heavily_ discouraged. Now, we can just run an AI model to follow you and sentence you to a hell of being permanently discredited online. Give it a bunch of accounts to rotate through, send some voting power behind it (reddit or hn), and just pick apart every value you hold. You could obliterate someone's will to discuss anything online. You could effectively silence all but the most stubborn, and those people you would probably drive insane.
It's a very interesting usecase though, filter through billions of comments and give everyone a score on which real life person they probably are. I wonder if say, Ted Cruz hides behind a username somewhere.
I put my profile in [0] and it's mostly silly; a few comments extracted and turned into jokes. No deep insights into me, and my "Top 3 Technologies" are hilariously wrong (I've never written a single line of TypeScript!)
That.. seems to just take a few (three or four) random comments that received some attention and then extrapolate an entire profile based on (incorrectly) interpreting their contents?
For “deep research” I’m also reading “getting the answers right”.
Most people I talk to are at the point now where getting completely incorrect answers 10% of the time — either obviously wrong from common sense, or because the answers are self contradictory — undermines a lot of trust in any kind of interaction. Other than double checking something you already know, language models aren’t large enough to actually know everything. They can only sound like they do.
What I’m looking for is therefore not just the correct answer, but the correct answer in an amount of time that’s faster than it would take me to research the answer myself, and also faster than it takes me to verify the answer given by the machine.
It’s one thing to ask a pupil to answer an exam paper to which you know the answers. It’s a whole next level to have it answer questions to which you don’t know the answers, and on whose answers you are relying to be correct.
My worry is that all these recent capabilities attempt to minimize hallucinations by relying on extensive web search, however web itself is being actively degraded by unfiltered LLM output. After a certain point running your research agent against a ~5-year-old snapshot of the web will be strictly more accurate (for non-current affairs queries) than querying live web.
It's really worrying to me, even as a self proclaimed "LLM <-> AI" skeptic, to see what kind of stuff people claim to get out of an LLM. Typewriter monkeys as a service, almost.
Still useful for the odd task here and there, but not as useful as all the money being invested in this (except for the companies getting that money, that is).
> Typewriter monkeys as a service almost. // Still useful for the odd task here and there
1) Paramount task: searching in naturally structured language, as opposed to keywords. Odd tasks: oh yes, several tasks of fuzzy sophisticated text processing previously unsolved.
2) They translate NN encodings in natural language! The issue remains about the quality of /what/ they translate in natural language, but one important chunk of the problem* is in a way solved...
Now, I've been probably one of the most vocal here, shouting "That's the opposite of intelligence!" - even in the past 24 hours - but be objective: there is also progress...
(* Around five years ago we were still stuck with Hinton's problem of interpreting pronouns as pointers in "the item won't fit in the case: it's too big" vs "the item won't fit in the case: it's too small" - look at it now...)
Of course I see progress, but I feel like the bridge from "regurgitates" to "thinks" is still far off, if it is on the horizon at all with the current approach. IMHO.
edit: furthermore, LLMs probably occupy very little "real estate" in the "make machines THINK" landscape. But they're a crucial piece of the overall puzzle.
> What I’m looking for is therefore not just the correct answer, but the correct answer in an amount of time that’s faster than it would take me to research the answer myself, and also faster than it takes me to verify the answer given by the machine.
This is why I haven't found AI tools very useful. I find myself spending more time verifying and fixing its answers than I would have spent just doing or learning the darn thing myself.
It is added cognitive load, but there is a lot of value in async tasks if you can trust the output or if the opportunity cost of validating is low.
The challenge with something like this for research, in its current state, is you’ll need to go double check it because you don’t trust it and it will end up effectively being a list of links.
It’s progress though and evidently good enough to find a sweet NSX in Japan, which is all some really need.
> and also faster than it takes me to verify the answer given by the machine.
I always thought there was a kind of NP-flavor to the problems for which LLM-like AI is helpful in practice, in the sense that solving the problem may be hard but checking the solution must be fast.
Unless the domain can accommodate errors/hallucinations, checking the solution (by a human) should be exponentially faster than finding it (by some AI), otherwise there's little practical gain.
> Most people I talk to are at the point now where getting completely incorrect answers 10% of the time
A year back that number was 30%, and a couple of years back it was 60%. There will be a point where it'll be good enough. There are also better and better ways to verify answers these days.
It'll never be a solution for everything, but that's similar to many engineering problems we have: for example, ORMs aren't great for all types of queries, but they're sufficient for a good part of them.
It contributes little to discuss a hypothetical future. Maybe we'll have fusion energy, delivery drones, everyone using VR, etc. Maybe we will go into a deep recession due to trade wars, or maybe not.
The meaningful discussion is about how they perform NOW and the edge cases that have persisted since GPT-2 which no one has yet found a good solution for.
I disagree though; it is useful, as this problem has been whittled down and I think there is an expectation of continued effort. It's of course worth discussing, but I find that for my workflows I rarely encounter issues with hallucinations. They certainly exist, but it's gotten to a point where I don't have a major issue with it.
At best, a proof of concept of experimental delivery drones exist, but only for small, lightweight items, and only in a few places, only in the right weather, and only if you place a target on your driveway and are there to receive the item in person, and all at the cost of a very high noise level. That's not exactly a real service.
You are sort of moving the goal post. The fact remains, drone delivery exists and is a solved problem. Major metro areas like Dallas, Texas have it through retailers like Walmart. Just because it does not meet your specific goal post does not mean it's a proof of concept.
More importantly, I think, is that they are incapable of not doing so. Have we figured out how to make an LLM realize and answer that it doesn't know an answer?
> language models aren’t large enough to actually know everything
I'd say they don't know anything.
An LLM base model, before it is post-trained with RL, just has access to a sliced and diced corpus of human output. Take the contents of 4chan and Wikipedia, put them in a blender, mix and chop into "training sample" sized bites, then learn the statistical regularities of this blended mess. It is what it is - not exactly what I'd call a knowledge base, even though there are bits of knowledge in there.
When you add RL-based post-training for reasoning, all you are doing is trying to get the model to be more selective when you sample from it - encouraging it to suppress some statistics and emphasize others, such that when you sample from it the output looks more like valid reasoning steps and/or conclusions, per the verified reasoning examples you train it on.
I'm well aware of how useful RL-tuned models (whatever the goal) can be, but at the end of the day all they are doing is taking a statistical babbler and saying "try to output patterns more like this". It's not exactly a recipe for factuality or rationality - we've just gone from hallucination-prone base models, to gaslighting-prone RL-tuned "reasoning" models that output stuff that sounds like coherent reasoning.
What's missing from all of this - what makes it different from how animals learn - is that the model has no experience of its own, no autonomy or motivation to explore, learn and verify, and hence no episodic memory of how it learnt something (tried it and ran controlled experiments, or just overheard it on the bus), and what that implies about its trustworthiness.
It's amazing that LLMs work as well as they do, a reflection of how much of what we do can be accomplished by reactive pattern matching, but if you want to go beyond that to something that can learn and discern the truth for itself, this seems the wrong paradigm altogether.
The amusing thing is that despite all you describe, and which many other people have described on this site and in others in great detail about the nature of LLMs, there are still many people who believe that these models essentially possess intelligence, and that it's little different from how a human mind expresses cognition.
I've seen this thinking to be especially prevalent among tech types (especially among many comments on this site), and more so than among average non-tech people I know.
It seems to be a reflection of a certain forced, almost ideological techno-reductionist thinking against the honestly complex and largely mysterious nature of consciousness. Many non-tech people on the other hand accept this mystery of consciousness and paradoxically are thus less likely to consider an LLM to be anything deeper than the clever but mindless pattern-matching trick that it is
I mean, this all falls down due to the need for verification:
"Limitations
Deep research unlocks significant new capabilities, but it’s still early and has limitations. It can sometimes hallucinate facts in responses or make incorrect inferences"
How do I know which parts are false? It will take as long to verify as to research!
This is terrifying. Even though they acknowledge the issues with hallucinations/errors, that is going to be completely overlooked by everyone using this, and then injecting the outputs into their own powerpoints.
Management Consulting was bad enough before the ability to mass produce these graphs and stats on a whim. At least there was some understanding behind the scenes of where the numbers came from, and sources would/could be provided.
The more powerful these tools become, the more prevalent this effect of seepage will become.
Either you care about being correct or you don't. If you don't care then it doesn't matter whether you made it up or the AI did. If you care then you'll fact check before publishing. I don't see why this changes.
When things are easy, you’re going to take the easy path even if it means quality goes down. It’s about trade offs. If you had to do it yourself, perhaps quality would have been higher because you had no other choice.
Lots of kids don’t want to do homework. That said, previously many would because there wasn’t another choice. But now they can just ask ChatGPT for the answers and write them down verbatim, with zero learning taking place.
Caring isn’t a binary thing, and it doesn’t work in isolation.
Because maybe you want to, but you have a boss breathing down your neck and KPIs to meet and you haven't slept properly in days and just need a win, so you get the AI to put together some graphs and stats that will look impressive in that client showcase that's due in a few hours.
Things aren't quite so black and white in reality.
I mean those same conditions already just lead the human to cutting corners and making stuff up themselves. You're describing the problem where bad incentives/conditions lead to sloppy work, that happens with or without AI
Catching errors/validating work is obviously a different process when it's coming from an AI vs a human, but I don't see how it's fundamentally that different here. If the outputs are heavily cited, then that might go some way toward making slip-ups easier to catch and correct.
Making it easier and cheaper to cut corners and make stuff up will result in more cut corners and more made up stuff. That's not good.
Same problem I have with code models, honestly. We already have way too much boilerplate and bad code; machines to generate more boilerplate and bad code aren't going to help.
Yep, I agree with this to some extent, but I think the difference in the future is all that stress will be bypassed and people will reach for the AI from the start.
Previously there was a lot of stress/pressure which might or might not have led to sloppy work (some consultants are of a high quality). With this, there will be no stress, which will (always?) lead to sloppy work. Perhaps there's an argument for the high quality consultants using the tools to produce accurate and high quality work. There will obviously be a sliding scale here. Time will tell.
I'd wager the end result will be sloppy work, at scale :-)
I think a lot about how differentiating facts and quality content is like differentiating signal from noise in electronics. The signal to noise ratio on many online platforms was already quite low. Tools like this will absolutely add more noise, and arguably the nature of the tools themselves make it harder to separate the noise.
I think this is a real problem for these AI tools. If you can’t separate the signal from the noise, it doesn’t provide any real value, like an out of range FM radio station.
It's possible that you care, but the person next to you doesn't, and external pressures force you to keep up with the person who's willing to shovel AI slop. Most of us don't have a complete luxury of the moral high ground at our jobs.
Maybe this would make sense if you saw the whole world as "kids" that you had to protect. As an adult who lives in an adult world, I would like adults to have access to metal tools and not just foam ones.
don't you think the problem of checking for correctness becomes more insidious then? we can now generate hundreds of reports that look very professional on the surface. the usual things that would tip you off that this person was careless aren't there -- typos, poor sentence construction, missing references. just more noise to pick through for signal
> If you care then you'll fact check before publishing.
Doing a proper fact check is as much work as doing the entire research by hand, and therefore, this system is useless to anyone who cares about the result being correct.
> I don't see why this changes.
And because of the above this system should not exist.
If 20% of people don't care about being correct, the rest of everyone can deal with that. If 80% of people don't care about being correct, the rest of us will not be able to deal with that.
Same thing as misinformation. A sufficient quantitative difference becomes a qualitative difference at some point.
Then the hallucinated research is published in an article which is then cited by other AI research, continuing the push the false information until it’s hard to know where the lie started.
The majority of human written consultant reports are already complete rubbish. Low accuracy, low signal-to-noise, generic platitudes in a quantity-over-quality format.
LLMs are inoculating people against this kind of low information value content.
People who produce LLM-quality output are now being accused of using LLMs, and can no longer pretend to be adding value.
The result of this is going to be higher quality expectations from consultants and a shaking out of people who produce word vomit rather than accurate, insightful, contextually relevant information.
I don't think so. Instead of SEO, I think we'll soon see 'LLMO' dominating such uses, where LLM summaries are reshaped by vendors and etailers to misrepresent facts in ways that favor them over others.
I suspect this can be done simply by poisoning a query with supplemental suggestions of sources to use in a RAG, many of which don't even have to be publicly available but are made accessible to the LLM (perhaps by submitting hidden URLs that mislead the summary along with the query).
But even after such a practice is uncovered and roundly maligned, that won't stop the infinite supply of net con men from continuing to inject their poisons into the background that drives deep research, so long as the LLM maker doesn't oppose this practice actively and publicly -- which none of them have been willing to do with any other LLM operational details so far.
In fact, I predict that if an LLM summary like DR's does NOT soon provide references to the sources of the facts it relies on, in no time users will disregard such summaries as yet more uselessly unreliable faff from yet another disreputable corner of the net -- as we do with search engine summaries now.
let's be real for a sec, i've done consulting and have a lot of friends who still do. three times in four, your mckinsey report isn't super well-founded in reality and involves a lot of guesstimation.
Hi tmnvdb, since you seem to love these super smart LLMs I thought it would be fun to have openais o3-mini-high analyze your recent comments in contrast to the Hacker News Comment Guidelines. Here is the output it gave me, hope it helps you:
------
Hey, I've noticed a few things in your style that are both strengths and opportunities for improvement:
Strengths:
- You clearly have deep knowledge and back up your points with solid data and examples.
- Your confidence and detailed analysis make your arguments compelling.
Opportunities:
- At times, your tone can feel a bit combative, which might shut down conversation.
- Focusing on critiquing ideas rather than questioning someone's honesty can help keep the discussion constructive.
- A clearer structure in longer posts could make your points even more accessible.
Overall, your passion and expertise shine through—tweaking the tone a bit might help foster even more productive debates.
------
Just reply here if you want the full 500+ words analysis that goes into more detail.
"PS: The name Triton was coined in mid-2019 when I released my PhD paper on the subject. I chose not to rename the project when the "TensorRT Inference Server" was rebranded as "Triton Inference Server" a year later since it's the only thing that ties my helpful PhD advisors to the project."
It's a sort of unofficial trade association where they coalesce on specific redefinitions of terms to meet their sales and PR efforts. First they came for "intelligence," then "open source," then "reason," and it will continue. Any word which the PR wants but they can't achieve gets redefined -- "grok" is a perfect example, since in the original sci-fi book it meant "total understanding." The mythological Triton ruled the deeps, so the "deep learning" sales copy immediately co-opted it.
Would have loved to try it when they released it, but I'm apparently in the wrong country. I think it's not available outside the US (?). OpenAI and DeepSeek have no such issues. It's a bummer really, I'm happy paying for this but they don't want me to.
I am afraid Gemini's version is not really very "deep" - it surfaces a lot of information, but on a quite superficial level. OAIs version seems to make that one step forward to proper depth.
We found in our experience that it is pretty hard to force an LLM to do something in proper depth, and OAI's deep research definitely feels like one of the first examples from the big labs of how this can be done.
What we typically see is that it is not even the "agent" part that is hard to do, but how to force the model not to "forget" to go deep...
I've always thought the Triton situation was intentional since the name isn't generic and because the companies are stepping on each other's toes here (Nvidia's Triton simplifying owning your inference; OpenAI's Triton eroding the need for familiarity with CUDA). I couldn't figure out who publicly used the name first though.
> what's with all of the naming overlap in the AI world? Triton (Nvidia, OpenAI) and Gro{k,q} (X.ai, groq, OpenAI) all come to mind
They seem to be ok with outsourcing any and all creativity to a language model, so it’s not surprising that they can’t come up with unique names themselves.
Not sure if people picked up on it, but this is being powered by the unreleased o3 model. Which might explain why it leaps ahead in benchmarks considerably and aligns with the claims that o3 is too expensive to release publicly. It seems to be quite an impressive model and the leader among the comparable offerings from Google, DeepSeek and Perplexity.
> Which might explain why it leaps ahead in benchmarks considerably and aligns with the claims o3 is too expensive to release publicly
It's the only tool/system (I won't call it an LLM) in their released benchmarks that has access to tools and the web. So, I'd wager the performance gains are strictly due to that.
If an LLM (o3) is too expensive to be released to the public, why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model. Most likely o3-mini or o1-mini, combined with 4o-mini for some tasks.
>why would you use it in a tool that has to make hundreds of inference calls to it to answer a single question? You'd use a much cheaper model.
The same reason a lot of people switched to GPT-4 when it came out even though it was much more expensive than 3 - doesn't matter how cheap it is if it isn't good enough/much worse.
They’ve only released o3-mini, which is a powerful model but not the full o3 that is being claimed as too expensive to release. That being said, DeepSeek for sure forced their hand to release o3-mini to the public.
I guess the question is, did DeepSeek force them to rethink pricing? It's crazy how much cheaper it (v3 and R1) is, but considering they (Deepseek) can't keep up with demand, the price is kind of moot right now. I really do hope they get the hardware to support the API again. The v3 and R1 models that are hosted by others are still cheap compared to the incumbents, but nothing can compete with DeepSeek on price and performance.
Interesting, thanks for highlighting! Did not pick up on that. Re:"leading", tho:
Effectiveness in this task environment is well beyond the specific model involved, no? Plus they'd be fools (IMHO) to only use one size of model for each step in a research task -- sure, o3 might be an advantage when synthesizing a final answer or choosing between conflicting sources, but there are many, many steps required to get to that point.
I don't believe we have any indication that the big offerings (claude.ai, Gemini, operator, tasks, canvas, chatgpt) use multiple models in one call (other than for different modalities like having Gemini create an image). It seems to actually be very difficult technically and I'm curious as to why.
I wonder how much of an impact comes from us still being so early in the productization phase of all this. It takes a ton of work and training and coordination to get multiple models synced up into an offering, and I think the companies are still optimizing for getting new ideas out there rather than truly optimizing them.
I’m not sure if you’re implying this subtly in your comment or not, as it’s early here, but it does of course need to be a generation ahead of what 10 months of their competitors moving forward have done too. Nobody is standing still
> Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.
If that's what you're referring to, then it doesn't seem that "explicit" to me. For example, how do we know that it doesn't use less thinking than o3-mini? Google's version of deep research uses their "not cutting edge version" 1.5 model, after all. Are you referring to something else?
o3-mini is not really "a version of the o3 model", it is a different model (fewer parameters). So their language strongly suggests, imo, that Deep Research is powered by a model with the same number of parameters as o3.
It is actually interesting for people working in academia. I would like to test it but no way I can afford $200/m right now.
Can someone test it with this prompt?
"As a research assistant with comprehensive knowledge of particle physics, please provide a detailed analysis of next-generation particle collider projects currently under consideration by the international physics community.
The analysis should encompass the major proposed projects, including the Future Circular Collider (FCC) at CERN, International Linear Collider (ILC), Compact Linear Collider (CLIC), various Muon Collider proposals, and any other significant projects as of 2024.
For each proposal, examine the planned energy ranges and collision types, estimated timeline for construction and operation, technical advantages and challenges, approximate costs, and key physics goals. Include information about current technical design reports, feasibility studies, and the level of international support and collaboration.
Present a thorough comparative analysis that addresses technical feasibility, cost-benefit considerations, scientific potential for new physics discoveries, timeline to first data collection, infrastructure requirements, and environmental impact. The projects should be compared in terms of their relative strengths, weaknesses, and potential contributions to advancing our understanding of fundamental physics.
Please format the response as a structured technical summary suitable for presentation at a topical meeting of particle physicists. Where appropriate, incorporate relevant figures and tables to facilitate clear comparisons between proposals. Base your analysis on information from peer-reviewed sources and official design reports, focusing on the most current available data and design specifications.
Consider the long-term implications of each proposal, including potential upgrade paths, flexibility for future modifications, and integration with existing research infrastructure."
Hopefully helpful, happy to test others for you :)
--
Raw output
--
Understood. I will conduct a detailed technical analysis of next-generation particle collider projects, focusing on the Future Circular Collider (FCC), International Linear Collider (ILC), Compact Linear Collider (CLIC), Muon Collider proposals, and other significant initiatives.
The report will include:
- Planned energy ranges and collision types
- Estimated construction and operation timelines
- Technical advantages and challenges
- Approximate costs
- Key physics goals
- Review of technical design reports and feasibility studies
- International collaboration and support
- Comparative analysis on feasibility, cost-benefit, scientific potential, infrastructure needs, and environmental impact
- Long-term considerations, including upgrade paths and integration with existing facilities
The analysis will be formatted as a structured technical summary, suitable for presentation at a topical meeting of particle physicists, with comparative tables and figures where relevant.
I will update you when the report is ready.
# Next-Generation Particle Collider Projects: A Comparative Technical Analysis
## Introduction
High-energy physics is charting plans for new colliders to extend beyond the LHC and its High-Luminosity upgrade (HL-LHC). The discovery of the 125 GeV Higgs boson has motivated proposals for a dedicated “Higgs factory” lepton collider to study its properties with precision, as well as future energy-frontier machines to search for new physics. Several next-generation collider initiatives are under consideration internationally as of 2024. These include CERN’s proposed *Future Circular Collider (FCC)*, Japan’s *International Linear Collider (ILC)*, CERN’s *Compact Linear Collider (CLIC)*, various designs for a *Muon Collider*, China’s *Circular Electron-Positron Collider (CEPC)* and its successor *Super Proton-Proton Collider (SppC)*, among others. Each proposal differs in collision type (electron-positron, proton-proton, muon-muon, etc.), energy scale, technology, timeline, cost, and physics focus. This summary reviews each project’s key parameters – *planned energy ranges, collision types, timeline, technical advantages/challenges, cost, and physics goals* – based on technical design reports and feasibility studies. A comparative analysis then contrasts their *technical feasibility, cost-benefit, scientific potential for discoveries, timeline to first data, infrastructure needs, and environmental impact*, highlighting the relative strengths and weaknesses of each approach. We also discuss long-term implications such as upgrade paths, flexibility for future modifications, and integration with existing infrastructure.
(Citations refer to official reports and peer-reviewed sources using the format 【source†lines】.)
## Future Circular Collider (FCC) – CERN
- *Type and Energy:* The FCC is a *proposed 100 km circular collider* at CERN that would be realized in stages. The first stage, *FCC-ee*, is an electron-positron ($e^+e^-$) collider with center-of-mass energy tunable from ~90 GeV up to 350–365 GeV, covering the Z boson pole, WW threshold, Higgs production (240 GeV), and top-quark pair threshold (~350 GeV). A second stage, *FCC-hh*, would use the same tunnel for a proton-proton collider at up to *100 TeV* center-of-mass energy (an order of magnitude above the LHC’s 14 TeV). Heavy-ion collisions (e.g. Pb–Pb) are also envisioned. An *FCC-eh* option (electron-hadron collisions) is considered by adding a high-energy electron injector to collide with the proton beam. This integrated FCC program thus spans both *precision lepton* collisions and *energy-frontier hadron* collisions.
- *Timeline:* The conceptual schedule foresees *FCC-ee construction in the 2030s* and a start of operations by around *2040* (as the LHC/HL-LHC program winds down). According to the FCC Conceptual Design Report, an $e^+e^-$ Higgs factory could begin delivering physics in ~2040, running for 15–20 years. The *hadron collider FCC-hh* would be constructed subsequently (using the same tunnel and upgraded infrastructure), aiming for *first proton-proton collisions in the late 2050s*. This staged approach (lepton collider first, hadron later) mirrors the successful *LEP–LHC sequence*, leveraging the $e^+e^-$ machine to produce great precision data (and to build infrastructure) before pushing to the highest energies with the hadron machine.
...
Thank you very much for doing that. It is actually somewhat impressive. It got a lot of the big-picture comparisons and points correct. There are problems with some details, but overall it does save some work in the initial search process.
What I like is that it asked you clarifying questions beforehand, but I wonder if that's just generic. The prompt mentioned that this would be for "presentation at a topical meeting of particle physicists", yet it still asked its last question about
> Intended Audience: Should the analysis assume a general physics audience or a more specialized group of particle physicists?
Also probably expected but it didn't include or reference graphs/plots.
Is this ability really a prerequisite to AGI and ASI?
Reasoning, problem solving, research validation - at the fundamental outset it is all refinement thinking.
Research is one of those areas where I remain skeptical it is that important because the only valid proof is in the execution outcome, not the compiled answer.
For instance you can research all you want about the best vacuum on the internet but until you try it out yourself you are going to be caught in between marketing, fake reviews, influencers, etc. maybe the science fields are shielded from this (by being boring) but imagine medical pharmas realizing that they can get whatever paper to say whatever by flooding the internet with their curated blog articles containing advanced medical “research findings”. At some point you cannot trust the internet at all and I imagine that might be soon.
I worry especially that, with the rapidly growing amount of generated text on the internet, research will lose a lot of value due to massive amounts of information garbage.
It will be a thing we used to do when the internet was still “real”.
> For instance you can research all you want about the best vacuum on the internet but until you try it out yourself you are going to be caught in between marketing, fake reviews, influencers, etc.
So you wouldn't use this tool for those types of use cases.
But still, a valid point. I recall I once wanted to compare Hydroflask, Klean Kanteen and Thermos to see how they perform for hot/cold drinks. I was looking specifically for articles/posts where people had performed actual measurements. But those were very hard to find, with almost all Google hits being generic comparisons with no hard data. That didn't stop them from ranking ("Hydroflask is better for warm drinks!")
Would I be able to get this to ignore all of those and use only ones where actual experiments were performed? And moreover, filter out duplicates (e.g. one guy does an experiment, and several other bloggers link to his post and repeat his findings in their own posts - it's one experiment but with many search results).
It's a direction in a vast landscape, not a feature in itself - being better at different tasks, like search generally, and research in conjunction with reasoning, gets the model closer to AGI. An AGI will be able to do these tasks - so the point of the research is to have more Venn diagrams of capabilities like these to help narrow down the view on things that might actually be fundamental mechanisms involved in AGI.
Moravec detailed the idea of a landscape of human capabilities slowly being submerged by AI capabilities; at the point where AI can do anything a human can, in practice or in principle, we'll know for certain we've reached truly general AI. This idea includes things like feeling pain and pleasure, planning, complex social, moral, and ethical dynamics, and anything else you can possibly think of as relevant to human intelligence. Deep Research is just another island being slowly submerged by the relentless and relentlessly accelerating flood.
Are we not machines anyway? Of course a machine can feel; it just needs to have priorities that are aligned to itself, and strong feedback when that self is either in danger or on the right path to preservation...
If I understood the graphs correctly, it only achieves 20% pass rate on their internal tests. So I have to wait 30min and pay a lot of money just to sift through walls of most likely incorrect text?
Unless the possibility of hallucinations is negligible, this is just way too much content to review at once. The process probably needs to be a lot more iterative.
Here's an example of the type of question it is achieving 20% on:
The set of natural transformations between two functors $F, G : C \to D$ can be expressed as the end

$$\mathrm{Nat}(F,G) \cong \int_{A} \mathrm{Hom}_D(F(A), G(A)).$$

Define the set of natural cotransformations from $F$ to $G$ to be the coend

$$\mathrm{CoNat}(F,G) \cong \int^{A} \mathrm{Hom}_D(F(A), G(A)).$$

Let:

- $F = B_\bullet(\Sigma_4)_{\ast/}$ be the under $\infty$-category of the nerve of the delooping of the symmetric group $\Sigma_4$ on 4 letters under the unique 0-simplex $\ast$ of $B_\bullet \Sigma_4$.
- $G = B_\bullet(\Sigma_7)_{\ast/}$ be the under $\infty$-category of the nerve of the delooping of the symmetric group $\Sigma_7$ on 7 letters under the unique 0-simplex $\ast$ of $B_\bullet \Sigma_7$.

How many natural cotransformations are there between $F$ and $G$?
As someone who doesn't understand anything beyond the word 'set' in that question, can anyone give an indication of how hard of a problem that actually is (within that domain)?
Also I'm curious as to what percentage of the questions in this benchmark are of this type / difficulty, vs the seemingly much easier example of "In Greek mythology, who was Jason's maternal great-grandfather?".
I'd imagine the latter is much easier for an LLM, and almost trivial for any LLM with access to external sources (such as deep research).
btw isn't this question at least really badly worded (and maybe incorrect)? The definitions they give for F and G are categories, not functors... (and both categories are in fact one object with a contractible space of morphisms...)
It's very interesting to think about what kind of "mental model" might it have, if it's capable of "understanding" all this (to me) gibberish, but is then unable to actually work the problem.
Did you intentionally flip through all the questions to find the one that seemed the easiest? If so, why? That's question #7, and all other 7 questions in the sample set seem ridiculously difficult to me.
it tests syllogistic reasoning: Jason's mother was Tyro, whose father was Poseidon, whose father was Kronos. it also tests whether it "eagerly" rather than comprehensively considers something: a maternal great-grandfather could be the father of either one's maternal grandmother or maternal grandfather. so the answer could also be king Aeolus of the Etruscans.
ideally a model would be able to answer this accurately and completely.
I think there are more possible answers? Jason's mother differs depending on the author...
For example, Jason's mother was Philonis, daughter of Mestra, daughter of Daedalion, son of Hesporos. So Jason's maternal great-grandfather was Hesporos.
LLMs often don't do well on tasks that require composition into smaller subtasks. In this case there is a chain of relations that depend on the previous result.
No it is not an actual question on this exam. From the paper: “To ensure question quality and integrity, we enforce strict submission criteria. Questions should be precise, unambiguous, solvable, and non-searchable, ensuring models cannot rely on memorization or simple retrieval methods. All submissions must be original work or non-trivial syntheses of published information, though contributions from unpublished research are acceptable. Questions typically require graduate-level expertise or test knowledge of highly specific topics (e.g., precise historical details, trivia, local customs) and have specific, unambiguous answers…”. (Emphasis mine)
Maybe. Not enough data to say. Say it does a day's worth of work in a query. It is sensible to use if it takes less than a day to review ~5 days' worth of work. I don't know if we're near that threshold yet, but conceptually this would work well for actual research, where the amount of preparation is large compared to the amount of output written.
And eyeballing the benchmarks, it'll probably reach a >50% rate per query by the end of the year. Seems to double every model or two.
Yeah it can be more iterative. Just use individual queries and build on it yourself. This is all this is doing. It's a trick, and OpenAI is a PR hype company at this stage.
The difference is that it takes a few minutes to an hour at most, so it can be run multiple times a day, using the results of previous runs to further refine the search and reasoning process and get better outcomes. Pretty much how any human research works, but much faster and with potentially vastly more world knowledge and reasoning capability than the average human. And these capabilities will rapidly improve with further RL.
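As a rough illustration of what that iterative loop might look like (a minimal sketch; web_search and llm are hypothetical stand-ins, not OpenAI's actual pipeline):

```python
# Minimal "search -> read -> refine" loop. web_search() and llm() are
# hypothetical helpers; the structure, not the API, is the point.
def deep_research(question, max_rounds=4):
    notes, queries = [], [question]
    for _ in range(max_rounds):
        results = [r for q in queries for r in web_search(q, top_k=5)]
        notes.append(llm(f"Summarize what these results say about: {question}\n{results}"))
        followup = llm(
            "Given these notes, list the biggest remaining gaps as new search queries, "
            "or reply DONE if the question is answered.\n" + "\n".join(notes)
        )
        if followup.strip() == "DONE":
            break
        queries = followup.splitlines()
    return llm(f"Write a cited report answering: {question}\nNotes:\n" + "\n".join(notes))
```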
There are some people in the blogosphere who are known experts in their niche or even niche-famous because they write popular useful stuff. And there are a ton more people who write useful stuff because they want that 'exposure.' At least, they do in the very broadest sense of writing it for another human to read it. I wonder if these people will keep writing when their readership is all bots. Dead internet here we come.
I'm all for writing just for the bots, if I can figure it out. A lot of academic papers aren't really read anyways, just briefly glanced at so they can be cited together, large publications like journal pubs or dissertations even less so. But the ability to add to a world of knowledge that is very easy to access by people who want to use it...that is very appealing to me as an author. No more trudging through a bunch of papers with titles that might be relevant to what I want to know about...and no more trudging through my papers, I'm OK with that.
Of course they will. Loads of people go around taking hundreds of photos with the biggest camera they can afford even though no-one else will ever willingly look at them.
Does anyone actually have access to this? It says available for pro users on the website today - I have pro via my employer but see no "deep research" option in the message composer.
What about a full refresh of the page, or perhaps jumping into the dev tools and checking "disable cache"?
Could also be aggressive caching from Cloudflare. Could be they're just trying to announce more stuff to maintain cachet and can't yet support all the users forking over $200/month.
I have access as of ~3 hours ago. Using the Win desktop app too, which is behind on some features (Operator, tasks). I open up any of the models and it shows up as a `(Deep research)` tag on the input field next to the web search option. Didn't clear cache or anything.
OpenAI is very much in an existential crisis and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release right?
Feels like only a matter of time before these crawlers are blocked from large swathes of the internet. I understand that they’re already prohibited from Reddit and YouTube. If that spreads, this approach might be in trouble.
TBF OpenAI in particular bought access to Reddit. Otherwise yeah this is my main confusion with all of these products, Perplexity being the biggest -- how do you get around the status-quo of refusing access to bots? Just to start off with, there is no Google Search API, and they work hard to make sure headless browsers can't access the normal service.
They do say "Currently, deep research can access the open web...", so maybe "open" there implies something significant. Like, "websites that have agreements with OpenAI and/or do not enforce norobot policies".
> Amazon listings are blocked from google shopping
I see Amazon results there all the time. 3 of the visible 8 sponsored results are Amazon, in the non-sponsored results an Amazon listing is either first or second in every category.
While people might attempt that, it's going to be an arms race, just like ads vs. ad blockers. There are already multiple crawlers that present a fake user agent when their original one is blocked. The temptation of more data is just too irresistible to them.
I suppose there is an equilibrium, where sites that penalize these types of crawlers will also get less traffic from people reading AI citations, so for many sites the upsides of allowing it will be greater than the downsides.
This is trivially bypassed by OpenAI asking the user to take control of their computer (or a sandboxed browser within it); then, for all intents and purposes, it's the user themselves accessing your site (with some productivity/accessibility aid from OAI).
"Deep research" is now somehow synonymous to searching online for stats and pulling stuff from Statista? And when I want to make changes to that report, do I have to tweak my prompt and get an entirely different document?
Not sure if I'm too tired and can't see it but the lack of images/examples of the resulting report in this announcement doesn't inspire a lot of confidence just yet.
This is 5-10 years out. What OpenAI is displaying here I've been able to do with relatively little code, a bit of scraping and far less capable models for a year. I really don't see what is novel or useful here.
One of the biggest issues with these things is reliability. o3 likely increases that quite a bit. The idea itself isn't novel but I don't see why this wouldn't be useful?
I had no idea there was a market for "Compile a research report on how the retail industry has changed in the last 3 years. Use bullets and tables where necessary for clarity." I imagine reading such a result is pure torture.
Can anyone confirm if this is available in Canada and other countries? This site says "We are still working on bringing access to users in the United Kingdom, Switzerland, and the European Economic Area." But I'm not sure about other countries. I don't have Pro currently, only Plus.
https://help.openai.com/en/articles/10500283-deep-research-f...
"Pro users (located outside the UK, Switzerland, and the European Economic Area) have access to 100 deep research tasks per month." So to me that must include Canada, right? I signed up for Pro. There it is. (For future reference, for someone else.)
The accuracy of this tool does not matter. This is exclusively designed for box-ticking "reports" that nobody reads and that are produced for their own sake.
The new term for this is "AI Loopidity", highlighting the unintelligent ouroboros nature of one side using AI to generate content and then another side to consume content.
“Pencil-neck” is a strange insult to use here. How are software developers, or hardware design engineers, or finance workers any less “pencil-neck” than “board of directors”?
It absolutely can replace the research done by one person, for my use case at least. It’s also available on their $20/month subscription, unlike OpenAI’s $200/month.
Oh God, this is such an astute observation. I think it worked so well on me that I didn't even think about the "deep" portion initially. Goes to show how effective these things are psychologically.
I have never believed a conspiracy theory more instantly. Deep Search vs. DeepSeek is way more than enough to confuse the average layman! Especially when you're googling something you heard about at work a few hours ago, or on Bloomberg TV
You might as well say that DeepSeek wanted to cause confusion with DeepMind. Deep isn't such a distinguishing name, deep learning has been a buzzword since 2012.
This is so lame. This feels like another desperate attempt to stay relevant cobbled together after the DeepSeek announcement last week. What was the other attempt they made? Skip a version number to seem like more progress was made (o1->o3)? From what I can tell "o3" is just the same as o1 with an extra reasoning-effort parameter.
Oh and "Deep research" is available to people on the $200 per month plan? Lol - cool. I've been using DeepSeek a lot more recently and it's so incredibly good even with all the scaling issues.
Surprised more comments aren't mentioning that DeepSeek has this feature (for free) already. Assuming this is why OpenAI scrambled to release it.
The examples they have on the page work well on chat.deepseek.com with r1 and search options both enabled.
Do I blindly trust the accuracy of either though? Absolutely not. I'm pretty concerned about these models falling into gaming SEO and finding inaccurate facts and presenting them as fact. (How easy is it to fool / prompt inject these models?)
Not really accurate. The "Search" functionality you're describing in DeepSeek is comparable to OpenAI's existing "Search GPT." OpenAI's recent announcement refers to a more advanced capability, similar to Gemini's existing "deep research" feature. DeepSeek's current offerings are significantly more limited in scope.
Doesn't seem like access is available to try "deep research" yet on OpenAI, so I can only speak to what I tried, which was their examples on the blog post (using DeepSeek w/ R1 + Search) and results were pretty similar.
AFAIK OpenAI's current offering uses 4o, and it does a web search and then pipes it into 4o. I'm guessing adding CoT + other R1/o3 like stuff is one of the key effective differences. But time will tell how different it is. Maybe it's a dramatic improvement.
Are you unaware that there is a "Deepthink (R1)" button right next to the "Search" button in DeepSeek's chat app? It's been there for some time, even before all the hype regarding R1.
I wish Kagi would work with similar performance. Their lenses feature is perfect for this and they already filter out most of the SEO spam based on trackers and other typical red flags.
Can it compile and run (non-Python) code as part of its tool use? Compile-run steps always seemed like they would be a huge value add during reasoning loops - it feels very silly to get output from ChatGPT, try to run it in terminal, get an error and paste the error to have ChatGPT immediately fix it. Surely it should be able to run code during the reasoning loop itself?
It sounds like it can run Python, which means it has access to Code Interpreter, which means it can run various other languages as well if you can convince it to do so.
Not quite the agent they are building but I have an open source alternative that lets you use a variety of models, based on links of your choice to generate reports: https://github.com/btahir/open-deep-research
> In Nature journal's Scientific Reports conference proceedings from 2012, in the article that did not mention plasmons or plasmonics, what nano-compound is studied?
Aren't there multiple articles that did not mention plasmons or plasmonics in Scientific Reports in 2012?
Also, did they pay for access to all journal contents? That would be useful.
So much cynicism and hate in these comments, especially as we are likely witnessing AGI come to life. It's still early, but it might be coming. Where is the excitement? This is an interesting time to be alive.
HN has a huge cultural problem that makes this website almost irrelevant. All the interesting takes have moved to X/twitter
We're looking at trends that may well obliterate the economic value of a well trained human mind sitting behind a keyboard all day. That is a bit of a threat to most people on HN if the trending continues at the current rate and direction.
“May you live in interesting times” is usually taken as a curse. ;)
More seriously, it’s unclear why one should be excited by the prospect of AGI, especially when instrumentalized by corporations and authoritarian governments.
AGI aside, sometimes HN critics/cynicism indeed points out the exact reason why something wouldn't work and is vindicated after the fact, e.g. Apple Vision Pro. I guess it's just hard to predict the future and for me, it's interesting to listen to even pure contrarians.
> "So much cynicism and hate in these comments, especially as we are likely witnessing AGI come to life. Its still early, but it might be coming. Where is the excitement? This is an interesting time to be alive."
Maybe you can define what "AGI" really means, and what the end game and the economic implications are when "AGI" is somewhat achieved? OpenAI somehow believes that they haven't achieved "AGI" yet, and they continue to say this on purpose, for obvious reasons.
The first hint I will give you is that it certainly won't be a utopia.
I would be more excited if it wasn't $200 a month to try.
I don't feel like OpenAI does a good job of getting me excited either.
Find the perfect snowboard? How can that idea get pitched and make the final cut for a $200 a month service? The NFL kicker example is also completely ridiculous.
The business and UX example seems interesting. Would love to see more.
Setting aside how well it works, I think this is a pretty nice demonstration of how to do UX for an agentic RAG app. I like that the intermediate steps have been pushed out to a sidebar, with updates that both provide some transparency about the process and make the high latency more palatable.
I remember that about 10-15 years ago, Ray Kurzweil (who still works at Google) or someone else at Google had this idea for what Google should be able to do: deep research by itself, from a simple search query. I can't find the source. Obviously it didn't pan out without transformers.
I think deep research as a service could be a really strong use case for enterprises, as long as they have access to non-public data. I assume that most of this guarded data is high quality, and seeing progress in these areas might end up being even more impressive than it is now.
I’m a researcher and honestly not worried. 1. Developing the right question has always been the largest barrier to great research. Not sure OpenAI can develop the right question without the human experience. 2. The second biggest part of my role is influencing people that my questions are the right questions, which is made easier when you have a thorough understanding of the first. That being said, I’m sure there will be many people here who will tell me that algorithms already influence people, and AI can think through much of any issues there are.
I do use these systems from time to time, but it just never renders any specific information that would make it great research.
These systems serve best at augmenting information discovery. When I'm tackling a new area or looking for the right terminology, these models provide a quick shortcut because they have good probabilistic "understanding" of my naive, jargon-free description. This allows me to pull in all of the jargon for the area of research I'm interested in, and move on to actually useful resources, whether that be journal articles, textbooks, or - rarely - online posts/blogs/videos.
The current "meta" is probably something like Elicit + NotebookLM + Claude for accelerating understanding of complex topics and extracting useful parts. But, again, each step requires that I am closely involved, from selecting the "correct" papers, to carefully aggregating and grooming the information pulled in from NotebookLM, to judging the usefulness of Claude's attempts to extract what I have asked for.
"will find, analyze, and synthesize hundreds of online sources"
Synthesize? Seems like the wrong word -- I think they would want to say something like "analyze, and synthesize useful outputs from hundreds of online sources".
> combine (a number of things) into a coherent whole: pupils should synthesize the data they have gathered | Darwinian theory has been synthesized with modern genetics.
To anyone who's tried it: how does it handle captchas? I can't imagine that OpenAI's IP addresses are anyone's favorites for unfettered access to web properties these days.
You can buy residential proxies to pretend you're a regular person IIRC, some of the browser automation companies do that to bypass rate limiting, captchas, etc.
When I was new to LLMs, I used Bing AI in a fun way. When I was writing my report, it was sometimes hard to find discussions or material about a certain topic.
What I did was ask Bing AI about that topic, and it returned information as well as sources for where it found it, so I picked up all those links and researched them myself.
Bing AI was a great resource for finding relevant links. That was until I found out about Perplexity; my life hasn't been the same since.
Actually sounds pretty cool, but the graph on expert level tasks is confusing my expectations. Saying it has a pass rate of less than 20% sounds a lot like saying this thing is wrong most of the time.
Granted, these strike me as difficult tasks and I’d likely ask it to do far simpler things, but I’m not really sure what to expect from looking at these graphs.
Ah, but the fact that it bothers to cite its sources is a huge plus. Between that and its search abilities it sounds valuable to me
I think that's mostly because of the access to information it has. Much of the highly useful information is not on the public internet or shows up on search engines, only domain experts know about them. Also, the websites may be paywalled or gated by login. So a better comparison would be if the models had the same level of access as an expert.
From the demo: “Use bullets and tables where necessary for clarity.” It’s weird that it would be necessary to specify that. I suppose they want to showcase that you can influence the output style, but it’s strange that you’d have to explicitly specify the use of something that is “necessary for clarity”. It comes across as either a flaw in the default execution, or as a merely performative incantation.
They produce one expensive but mostly useless gimmick after another.
No one is ever going to trust the output this generates, and then they'd need to spend so much time fact-checking that they might as well do the entire research from scratch themselves...
Feels more and more like openAI doesn't have "that next big thing".
To be clear, I'm constantly impressed with what they have and what I get as a customer, but the delivery since GPT-4 hasn't exactly been in line with Altman's Musk-tier vaporware promises...
This makes sense. I often use the normal search feature to research a very large amount of information, and it mostly does not work well. If the new search feature increases the number of websites scraped and the pertinence of the websites, I'm all in.
Is there a benchmark we can use to compare this against You.com's research mode? It looks like R1 forced them to release o3 prematurely and give it Internet access. And they didn't want to say they released o3, so they called it 'Deep Research'.
"synthesize large amounts of online information" does it heavily depend on the search engine performance and relevance of the search results? I don't see any mention of Google or Bing. Is this using their internal search engine then?
Eh, not really. Google failed to launch first out of internal political dysfunction and then made a crash effort to launch something to counter the first ChatGPT release.
I highly doubt that the concerns of internal political commissars were holding up this particular openai release.
What's a decent setup to replicate this via an open model and agent framework? One thing I have struggled with is getting comprehensive web searches using an agentic framework.
I don't know. OpenAI is so bad at naming... the average person on the street will confuse DeepSeek with Deep Research. Also, not to forget o1, o3... 4o.
The demo on global e-commerce trends seems less useful than a Google search, where the AI answer will at least give you links to the claimed information.
The OpenAI Deep Research graph showing tool calls vs pass rate reveals something fascinating about how these models handle increasing amounts of information. The relationship follows a logistic curve that plateaus around 16% pass rate, even as we allow more tool calls.
This plateau behavior reflects something deeper about our current approach to AI. We've built transformer architectures partly inspired by simplified observations of human cognition - particularly how our brains use attention mechanisms to filter and process information. And like human attention, these models have inherent constraints: each attention layer normalizes scores to sum to 1, creating a fixed "attention budget" that must be distributed across all inputs.
A recent paper (https://arxiv.org/abs/2501.19399) explores this limitation, showing how standard attention becomes increasingly diffuse with longer contexts. Their proposed "Scalable-Softmax" helps maintain focused attention at longer ranges, but still shows diminishing returns - pushing the ceiling higher rather than eliminating it.
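For intuition, here is a small numpy sketch of that dilution effect, together with my reading of the paper's Scalable-Softmax (scale the logits by s·log n before the usual softmax). The scaling factor and toy setup are my own assumptions, not the authors' code:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def ssmax(z, s=0.4):
    # My reading of Scalable-Softmax (arXiv:2501.19399): scale logits by
    # s * log(n), so a clearly relevant token keeps its weight as n grows.
    n = len(z)
    return softmax(s * np.log(n) * z)

for n in (16, 1024, 65536):
    z = np.zeros(n)
    z[0] = 5.0  # one clearly "relevant" token among n
    print(n, softmax(z)[0].round(3), ssmax(z)[0].round(3))

# With plain softmax, the weight on the relevant token collapses as n grows;
# with the length-dependent scaling, it stays concentrated.
```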
But here's the deeper question: As we push toward AGI and potentially superintelligent systems, should we remain bound by architectures modeled on our current understanding of human cognition? The human brain's limited attention mechanism evolved under specific constraints and for specific purposes. While it's remarkably effective for human-level intelligence, it might be fundamentally limiting for artificial systems that could theoretically process information in radically different ways.
Looking at the Deep Research results through this lens, the plateau might not just be a technical limitation to overcome, but a sign that we need to fundamentally rethink how artificial systems could process and integrate information. Instead of trying to stretch the capabilities of attention-based architectures, perhaps we need to explore entirely different paradigms of information processing that aren't constrained by biological analogues.
This isn't to dismiss the remarkable achievements of transformer architectures, but rather to suggest that the path to AGI might require breaking free from some of our biologically-inspired assumptions. What would an architecture that processes information in ways fundamentally different from human cognition look like? How might it integrate and reason about information without the constraints of normalized attention?
Would love to hear thoughts from others working on these problems, particularly around novel approaches that move beyond our current biological inspirations.
No, it just suggests that RL was used over a base SFT model, and moreover that RL here was tuned to this research task. Personally I don't think that RL is strictly necessary for this task at all, but perhaps it helps.
What is the current state of DSPy optimizers? When I originally checked it out it appeared to just be optimizing the set of examples used for n-shot prompting.
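For what it's worth, the example-selection optimizer you're describing is (or at least was) BootstrapFewShot. A rough sketch from memory, so the exact names may have shifted between DSPy versions:

```python
import dspy

# Illustrative only; an LM would need to be configured (e.g. via
# dspy.settings) before this could actually compile or run.
class AnswerQuestion(dspy.Signature):
    """Answer a research question concisely."""
    question = dspy.InputField()
    answer = dspy.OutputField()

program = dspy.ChainOfThought(AnswerQuestion)

# A toy training example (hypothetical data).
trainset = [
    dspy.Example(question="Who wrote The Origin of Species?",
                 answer="Charles Darwin").with_inputs("question"),
]

def exact_match(example, prediction, trace=None):
    return example.answer.lower() in prediction.answer.lower()

# BootstrapFewShot searches for demonstrations (the n-shot examples) that
# make the student program pass the metric -- i.e. it optimizes the example
# set rather than the prompt wording or the model weights.
optimizer = dspy.teleprompt.BootstrapFewShot(metric=exact_match)
compiled = optimizer.compile(program, trainset=trainset)
```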
I love that when "open"AI has released things this last year or so, they do not actually release them. So in the meantime we all get the chance to enjoy a bunch of speculative, shilling comments here about this next great thing being miles ahead of competitors / close to AGI / the tool that will actually do X thing that others complain LLMs are so far failing to do.
I have the $200/month version. Deep Research arrived this morning.
So far I tried it on one problem and it seems limited by the "front end" being 4o-mini. It ignored most of my initial prompt and also ignored the previous research it asked for which I provided. The final output was high quality and definitely was enriched by the web searching it did, but it left out a crucially important dimension of the problem because it was unable to ingest the background info I provided adequately.
I'd like to see a version of it where the front end model is o1-pro
I really don't like the snarky tone of the parent comment.
Nonetheless, I don't think this is even something that can easily be benchmarked. I'd recommend you take a look at aider [1], and consider how I drew similarities between it and what's presented here.
Has ClosedAI presented any benchmarks / evaluation protocols?
I don’t think you actually read it. The benchmarks are in reference to the model that’s underlying deep-research, and not deep-research itself. For the latter, they have anecdata from scientists.
What does that even mean? Treating each iterative model as a new product is not any different than Google changing its search or youtube recommendation algorithm.
Different pre-cooked prompts and filters don’t really amount to new products either, despite them being marketed as such. It’s like adobe treating each tool in photoshop as its own product.
Sure if you're viewing this as some kind of spectator thing, or entertainment, maybe it's less interesting. But it doesn't really matter whether "people care". What matters is whether it's useful and has impact. It's enough if the small number of people use it for whom it is useful. It doesn't matter if the average Joe on the street is excited by it.
Few people care or even know about various advances in various specialized fields. It's enough if AI simply seeps into various applications in boring and non-flashy ways for it to have significant effects that will affect a wider range of people, whether they get hyped by the news announcements or not. Jobs etc.
An analogy: the Internet as such is not very exciting nowadays, certainly not in the way it was exciting in the 90s with all the news segments about surfing the information superhighway or whatever. There was a lot of buzz around the web, but then it got normalized. It didn't disappear, it just got taken for granted. No average person got excited around HTML5 or IPv6. It just chugs along in the background. AI will similarly simply build into the fabric of how things get done. Sometimes visibly to the average person, sometimes just behind the scenes.
Not sure if it's just me, but it looks like all SOTA companies are doubling down to chase the new benchmark, which beyond hype, doesn't seem to translate into real world uses. Why don't these companies just plug it into a popular git repo and say, hey our AI fixed these 100 issues! Or something real? The only people who seem to be doing something real is DeepMind.
This is especially not a breakthrough justifying a 340B USD valuation, but rather work that junior developers could do: implement a loop of Bing searches connected to an LLM.
If they "knew the questions in advance," why'd they need Internet access at all? The ability to use the same data sources humans would use is not the insult you seem to think it is.
Again: the assertion was yours, so let us know the results of your own work.
I haven’t tried the OpenAI version yet, as I’m on their peasant-level $20 plan, but the Google equivalent is way superior to Perplexity (I use both extensively). The web search Perplexity carries out is superficial compared to the Google product; it misses a large percentage of what Gemini Deep Research finds, and for a particular task in my business this makes a huge difference.
Late Sunday night, I gained access to OpenAI’s newly launched Deep Research and immediately tested it on a draft blog post about Uniform Electronic Transactions Act (UETA) compliance and AI-agent error handling [1]. Here’s what I found:
Within minutes, it generated a detailed, well-cited research report that significantly expanded my original analysis, covering:
* Legal precedents & case law interpretations (including a nuanced breakdown of UETA Section 10).
* Comparative international frameworks (EU, UK, Canada).
* Real-world technical implementations (Stripe’s AI-driven transaction handling).
* Industry perspectives & business impact (trust, risk allocation, compliance).
* Emerging regulatory standards (EU AI Act, FTC oversight, ISO/NIST AI governance).
What stood out most was its ability to:
- Synthesize complex legal, business, and technical concepts into clear, actionable insights.
- Connect legal frameworks, industry trends, and real-world case studies.
- Maintain a business-first focus, emphasizing practical benefits.
- Integrate 2024 developments with historical context for a deeper analysis.
The depth and coherence of the output were comparable to what I would expect from a team of domain experts—but delivered in a fraction of the time.
From the announcement: Deep Research leverages OpenAI’s next-generation model, optimized for multi-step research, reasoning, and synthesis. It has already set new performance benchmarks, achieving 26.6% accuracy on Humanity’s Last Exam (the highest of any OpenAI model) and a 72.57% average accuracy on the GAIA Benchmark, demonstrating advanced reasoning and research capabilities.
Currently available to Pro users (with up to 100 queries per month), it will soon expand to Plus and Team users. While OpenAI acknowledges limitations—such as occasional hallucinations and challenges in source verification—its iterative deployment strategy and continuous refinement approach are promising.
My key takeaway: This LLM agent-based tool has the potential to save hours of manual research while delivering high-quality, well-documented outputs. Automating tasks that traditionally require expert-level investigation, it can complete complex research in 5–30 minutes (just 6 minutes for my task), with citations and structured reasoning.
I don’t see any other comments yet from people who have actually used it, but it’s only been a few hours. I’d love to hear how it’s performing for others. What use cases have you explored? How did it do?
(Note: This review is based on a single use case. I’ll provide further updates as I conduct broader testing.)
The second was a task to do an industry analysis on a space in which I worked for about ten years. I think its overall synthesis was good (it accorded with my understanding of the space), but there were a number of errors in the statistics and supporting evidence it compiled, based upon my random review of the source material.
I think the product is cool and will definitely be helpful, but I would still recommend verifying its outputs. I think the process of verification is less time-consuming than the process of researching and writing, so that is likely an acceptable compromise in many cases.
Ok, so I do this as a noob in some field. How do I know or trust the research conclusions? How do I know it hasn't hallucinated its conclusions? I'll likely have to do my own research just to verify it, and then, if I did, I might as well have done the research myself.
What a coincidence to release deep research for your product just when one of your main competitors has DeepSeek R1 as their best-performing version /s
Seriously, for the past 20+ years it's been hard to imagine doing research without the Google platform, namely Google Search, Scholar, Patents, and Books, but now it seems agentic AI based on LLMs is the way to go. Twenty years from now it will be hard to imagine doing research without them. But as many people have already pointed out, Google is probably the best-positioned company by far to perform this emerging AI-based research. In data-ecosystem terms (refer to any book on data engineering), Google has already performed the most important upstream data preparation and data engineering activities, including data ingestion and transformation. Now, given their vast amount of processed data, they can just serve it to downstream data analytics or AI to perform research with as few errors/hallucinations as possible. According to Google there is no moat for any company against open-source LLMs, but if any company has a moat it will be Google itself.
I see lots of warranted skepticism about the capabilities of this tool, but the reality is that this is an incremental step toward full automation of white collar labor. No, it will not make all analysts jobless overnight. But it may reduce hiring of said people by 5 or 10 percent. And as people get better at using the tool and the tool itself gets better, those numbers will grow. Remember that it took decades for the giant pool of typing secretaries in Mad Men to disappear, but they did disappear. Gone forever. Interestingly, anger about the diminishment of secretarial male white collar work in Germany due to the spread of the typewriter a few decades earlier was one of the drivers of the Nazi Party’s popularity (see Evans, the Rise of the Third Reich).
AI’s triumph in the white collar workplace will be gradual, not instantaneous. And it will be grimly quiet, because no one likes white collar workers the way they like blue collar workers, for some odd reason, and there’s no tradition of solidarity among white collar workers. Everyone will just look up one day and find that the local Big Corp headquarters is…empty.
I'm sorry but what the fuck is this product pitch?
Anyone who's done any kind of substantial document research knows that it's a NIGHTMARE of chasing loose ends & citogenesis.
Trusting an LLM to critically evaluate every source and to be deeply suspect of any unproven claim is a ridiculous thing to do. These are not hard reasoning systems, they are probabilistic language models.
o1 and o3 are definitely not your run of the mill LLM. I've had o1 correct my logic, and it had correct math to back up why I was wrong. I'm very skeptical, but I do think at some point AI is going to be able to do this sort of thing.
I’m not sure I understand what you mean by “the button”. If you’re comparing this to DeepSeek’s copying, it’s not really the same thing right? DeepSeek essentially stole intellectual property by violating OpenAI’s terms of service. As I understand it, this is a copy of Google’s Deep Research
Deepseek proved that there is no moat. Thus no path to profitability for openai, anthropic & co.
Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permission of the copyright holders.
Now the same is being done to openai. Well, too bad.
> Stealing from thieves is fine by me. Sama was the one claiming that all information could be used to train LLMs, without permisdion of the copyright holders.
OpenAI and other LLMs scraping the internet is probably covered under fair use. DeepSeek’s violation of OpenAI’s terms is pretty clearly a violation of their terms and not legal.
Care to explain how something that cannot be copyrighted and was not generated by a human is “intellectual property“? Or are you just parroting a narrative?
Explain the trade secrets contained in non-copyrightable AI outputs and the reasonable efforts OpenAI takes to keep its AI output “secret”. Or are you confused about what a “trade secret” actually is?
Yes those cases will be interesting. By default a lot of copyrighted content may be legal to use for training (in the US but also many other places) under what’s called fair use. The cases you’re referring to will likely reinforce this, but it isn’t known yet. Note that it’s not just OpenAI on that side of the argument but also other (non tech) organizations that believe protecting fair use here is current law and essential.
Yeah, but a guy paying ClosedAI to get "insights" that are basically copy-pasted content from my blog is definitely violating my blog's copyright, and in the end no coin comes to me either. What about that?
Could you provide an example where OpenAI outputting verbatim quotes actually constitutes the copyright violation? Because mechanically retrieving relevant quotes seems analogous to grep/search - the copyright status would depend on how downstream users transform and use that content. Like how quoting your blog in a technical analysis or critique is fair use, but wholesale republishing isn't. This suggests the violation occurs at usage time, not retrieval time.
I see many are offended, but I am genuinely asking a question.
I want to understand: does this mean it's ethical for anyone to create a research AI tool that will go through arXiv and related GitHub repos and use them to solve problems and implement ideas, like Cursor?
Each release from openAI gives me less hope for them and this whole AI boom. They should be leading the charge of highlighting how the current generation of LLMs fail, not churning out half-baked overhyped products.
Yes, they can do some cool tricks, and tool calling is fun. No one should trust the output of these models, though. The hallucinations are bad, and my experience with the "reasoning" models is that as soon as they fuck up (they always do) they go off the rails worse than the base LLMs.
It appears that OpenAI is in panic mode after the release of DeepSeek. Before, they were confident in competing against Google with any AI model they released.
Now they are scrambling against open-source after their disastrous operator demonstration and using this deep research demo as cover. Nothing that Google or Perplexity could not already do themselves.
By the end of the month, this feature is going to be added by a bunch of other open-source projects, and it won't be as interesting anymore.
I don’t think you’re comparing the right things here. This feature is more like Google’s Deep Research, which basically goes off and does a whole lot of search and compute to produce something more like a full research report. This has nothing to do with open weight models like DeepSeek (note: DeepSeek, Llama, etc are NOT open source). This feature doesn’t just require the research on the model but also enormous compute. Plus anyone using such a feature for real work is not going to be using DeepSeek or whatever, but a product with trustworthy practices and guarantees.
> This feature is more like Google’s Deep Research, which basically goes off and does a whole lot of search and compute to produce something more like a full research report.
Of course. It is in response to their disastrous operator demo which did not justify the $200 per month ChatGPT Pro subscription on top of the release of DeepSeek to make matters worse for them.
> This has nothing to do with open weight models like DeepSeek (note: DeepSeek, Llama, etc are NOT open source).
It obviously does. Even before they rushed this presentation, they made o3-mini available for ChatGPT free users, so it is in direct response to DeepSeek.
> This feature doesn’t just require the research on the model but also enormous compute. Plus anyone using such a feature for real work is not going to be using DeepSeek or whatever, but a product with trustworthy practices and guarantees.
Nothing that Perplexity + DeepSeek-R1 can already do.
OpenAI is very much in an existential crisis and their poor execution is not helping their cause. Operator or “deep research” should be able to assume the role of a Pro user, run a quick test, and reliably report on whether this is working before the press release right?
man you work for high flyer or something? i know that's not really a fair question but oai still seems to lead the pack. i know it's a hype-y area but responding to one (1) model that's comparable to o4 but cheaper with "guys it's so over for openai" is excessive.
Appreciate the thoughtful engagement. I work for a large, US-based investment firm. No relationship w/ High Flyer.
This isn’t a single model. Almost the entire leadership team around sama has left and almost certainly agrees with me on this. OpenAI’s business model is not sustainable.
> can sometimes hallucinate facts in responses or make incorrect inferences, though at a notably lower rate than existing ChatGPT models, according to internal evaluations. It may struggle with distinguishing authoritative information from rumors, and currently shows weakness in confidence calibration, often failing to convey uncertainty accurately
Taken from the limitations section.
These tools are just good at creating pollution. I don't see the point of delegating (not just) research where 1% blatant mistakes are acceptable. These need much better grounding before being handed out to the masses.
I cannot take any output from these tools (Google summaries, comment summaries by Amazon, YouTube summaries, etc.) at face value while knowing for a fact that some of it is a total lie. I cannot tell which part is the lie. E.g., if an LLM says that the sentiment in a given text is divided, it could be just one person with an opposing view.
If the same task were given to a person, I could reason with that person about any conclusion. These tools will reason on top of their hallucinations.